00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 351 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3016 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.075 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.075 The recommended git tool is: git 00:00:00.075 using credential 00000000-0000-0000-0000-000000000002 00:00:00.077 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.103 Fetching changes from the remote Git repository 00:00:00.105 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.154 Using shallow fetch with depth 1 00:00:00.154 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.154 > git --version # timeout=10 00:00:00.177 > git --version # 'git version 2.39.2' 00:00:00.177 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.178 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.178 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:26.377 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:26.390 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:26.403 Checking out Revision f964f6d3463483adf05cc5c086f2abd292e05f1d (FETCH_HEAD) 00:00:26.403 > git config core.sparsecheckout # timeout=10 00:00:26.417 > git read-tree -mu HEAD # timeout=10 00:00:26.437 > git checkout -f f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=5 00:00:26.466 Commit message: "ansible/roles/custom_facts: Drop nvme features" 00:00:26.467 > git rev-list --no-walk f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=10 00:00:26.611 [Pipeline] Start of Pipeline 00:00:26.626 [Pipeline] library 00:00:26.628 Loading library shm_lib@master 00:00:26.628 Library shm_lib@master is cached. Copying from home. 00:00:26.645 [Pipeline] node 00:00:26.655 Running on VM-host-SM0 in /var/jenkins/workspace/ubuntu22-vg-autotest 00:00:26.657 [Pipeline] { 00:00:26.669 [Pipeline] catchError 00:00:26.670 [Pipeline] { 00:00:26.684 [Pipeline] wrap 00:00:26.694 [Pipeline] { 00:00:26.702 [Pipeline] stage 00:00:26.704 [Pipeline] { (Prologue) 00:00:26.723 [Pipeline] echo 00:00:26.725 Node: VM-host-SM0 00:00:26.731 [Pipeline] cleanWs 00:00:26.741 [WS-CLEANUP] Deleting project workspace... 00:00:26.741 [WS-CLEANUP] Deferred wipeout is used... 
00:00:26.748 [WS-CLEANUP] done 00:00:26.917 [Pipeline] setCustomBuildProperty 00:00:26.993 [Pipeline] nodesByLabel 00:00:26.994 Found a total of 1 nodes with the 'sorcerer' label 00:00:27.005 [Pipeline] httpRequest 00:00:27.010 HttpMethod: GET 00:00:27.010 URL: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:27.011 Sending request to url: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:27.032 Response Code: HTTP/1.1 200 OK 00:00:27.033 Success: Status code 200 is in the accepted range: 200,404 00:00:27.033 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:42.215 [Pipeline] sh 00:00:42.496 + tar --no-same-owner -xf jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:42.517 [Pipeline] httpRequest 00:00:42.521 HttpMethod: GET 00:00:42.522 URL: http://10.211.164.96/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:42.522 Sending request to url: http://10.211.164.96/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:42.532 Response Code: HTTP/1.1 200 OK 00:00:42.532 Success: Status code 200 is in the accepted range: 200,404 00:00:42.533 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:01:22.549 [Pipeline] sh 00:01:22.827 + tar --no-same-owner -xf spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:01:26.119 [Pipeline] sh 00:01:26.396 + git -C spdk log --oneline -n5 00:01:26.396 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset 00:01:26.396 e2cb5a5ee bdev/nvme: Factor out nvme_ns active/inactive check into a helper function 00:01:26.396 4b134b4ab bdev/nvme: Delay callbacks when the next operation is a failover 00:01:26.396 d2ea4ecb1 llvm/vfio: Suppress checking leaks for `spdk_nvme_ctrlr_alloc_io_qpair` 00:01:26.396 3b33f4333 test/nvme/cuse: Fix typo 00:01:26.417 [Pipeline] withCredentials 00:01:26.426 > git --version # timeout=10 00:01:26.438 > git --version # 'git version 2.39.2' 00:01:26.454 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:26.456 [Pipeline] { 00:01:26.465 [Pipeline] retry 00:01:26.466 [Pipeline] { 00:01:26.484 [Pipeline] sh 00:01:26.763 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:26.774 [Pipeline] } 00:01:26.797 [Pipeline] // retry 00:01:26.801 [Pipeline] } 00:01:26.815 [Pipeline] // withCredentials 00:01:26.824 [Pipeline] httpRequest 00:01:26.828 HttpMethod: GET 00:01:26.828 URL: http://10.211.164.96/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:26.829 Sending request to url: http://10.211.164.96/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:26.835 Response Code: HTTP/1.1 200 OK 00:01:26.835 Success: Status code 200 is in the accepted range: 200,404 00:01:26.836 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:39.493 [Pipeline] sh 00:01:39.772 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:41.158 [Pipeline] sh 00:01:41.490 + git -C dpdk log --oneline -n5 00:01:41.490 eeb0605f11 version: 23.11.0 00:01:41.490 238778122a doc: update release notes for 23.11 00:01:41.490 46aa6b3cfc doc: fix description of RSS features 00:01:41.490 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:41.490 7e421ae345 devtools: support skipping forbid rule check 00:01:41.507 [Pipeline] writeFile 
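The httpRequest/tar pairs above pull commit-pinned tarballs of the jbp scripts, spdk and dpdk from the lab mirror and unpack them into the workspace. A minimal shell equivalent of one such step (URL and SHA are copied from the log above; the curl form itself is illustrative and not part of the pipeline):

  # Fetch the commit-pinned spdk tarball from the internal mirror and unpack it.
  sha=36faa8c312bf9059b86e0f503d7fd6b43c1498e6
  curl -fSs -o "spdk_${sha}.tar.gz" "http://10.211.164.96/packages/spdk_${sha}.tar.gz"
  tar --no-same-owner -xf "spdk_${sha}.tar.gz"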
00:01:41.523 [Pipeline] sh 00:01:41.803 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:41.815 [Pipeline] sh 00:01:42.095 + cat autorun-spdk.conf 00:01:42.095 SPDK_TEST_UNITTEST=1 00:01:42.095 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:42.095 SPDK_TEST_NVME=1 00:01:42.095 SPDK_TEST_BLOCKDEV=1 00:01:42.095 SPDK_RUN_ASAN=1 00:01:42.095 SPDK_RUN_UBSAN=1 00:01:42.095 SPDK_TEST_RAID5=1 00:01:42.095 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:42.095 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:42.095 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:42.102 RUN_NIGHTLY=1 00:01:42.103 [Pipeline] } 00:01:42.120 [Pipeline] // stage 00:01:42.135 [Pipeline] stage 00:01:42.138 [Pipeline] { (Run VM) 00:01:42.151 [Pipeline] sh 00:01:42.433 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:42.433 + echo 'Start stage prepare_nvme.sh' 00:01:42.433 Start stage prepare_nvme.sh 00:01:42.433 + [[ -n 3 ]] 00:01:42.433 + disk_prefix=ex3 00:01:42.433 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest ]] 00:01:42.433 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf ]] 00:01:42.433 + source /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf 00:01:42.433 ++ SPDK_TEST_UNITTEST=1 00:01:42.433 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:42.433 ++ SPDK_TEST_NVME=1 00:01:42.433 ++ SPDK_TEST_BLOCKDEV=1 00:01:42.433 ++ SPDK_RUN_ASAN=1 00:01:42.433 ++ SPDK_RUN_UBSAN=1 00:01:42.433 ++ SPDK_TEST_RAID5=1 00:01:42.433 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:42.433 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:42.433 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:42.433 ++ RUN_NIGHTLY=1 00:01:42.433 + cd /var/jenkins/workspace/ubuntu22-vg-autotest 00:01:42.433 + nvme_files=() 00:01:42.433 + declare -A nvme_files 00:01:42.433 + backend_dir=/var/lib/libvirt/images/backends 00:01:42.433 + nvme_files['nvme.img']=5G 00:01:42.433 + nvme_files['nvme-cmb.img']=5G 00:01:42.433 + nvme_files['nvme-multi0.img']=4G 00:01:42.433 + nvme_files['nvme-multi1.img']=4G 00:01:42.433 + nvme_files['nvme-multi2.img']=4G 00:01:42.433 + nvme_files['nvme-openstack.img']=8G 00:01:42.433 + nvme_files['nvme-zns.img']=5G 00:01:42.433 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:42.433 + (( SPDK_TEST_FTL == 1 )) 00:01:42.433 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:42.433 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:42.433 + for nvme in "${!nvme_files[@]}" 00:01:42.433 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:42.433 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:42.433 + for nvme in "${!nvme_files[@]}" 00:01:42.433 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:42.433 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:42.433 + for nvme in "${!nvme_files[@]}" 00:01:42.433 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:42.433 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:42.433 + for nvme in "${!nvme_files[@]}" 00:01:42.433 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:42.433 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:42.433 + for nvme in "${!nvme_files[@]}" 00:01:42.433 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:42.433 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:42.433 + for nvme in "${!nvme_files[@]}" 00:01:42.433 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:42.433 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:42.433 + for nvme in "${!nvme_files[@]}" 00:01:42.433 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:42.693 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:42.693 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:42.693 + echo 'End stage prepare_nvme.sh' 00:01:42.693 End stage prepare_nvme.sh 00:01:42.703 [Pipeline] sh 00:01:42.979 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:42.979 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -H -a -v -f ubuntu2204 00:01:42.979 00:01:42.979 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant 00:01:42.979 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk 00:01:42.979 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest 00:01:42.979 HELP=0 00:01:42.979 DRY_RUN=0 00:01:42.979 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img, 00:01:42.979 NVME_DISKS_TYPE=nvme, 00:01:42.979 NVME_AUTO_CREATE=0 00:01:42.979 NVME_DISKS_NAMESPACES=, 00:01:42.979 NVME_CMB=, 00:01:42.979 NVME_PMR=, 00:01:42.979 NVME_ZNS=, 00:01:42.979 NVME_MS=, 00:01:42.979 NVME_FDP=, 00:01:42.979 SPDK_VAGRANT_DISTRO=ubuntu2204 00:01:42.979 SPDK_VAGRANT_VMCPU=10 00:01:42.979 SPDK_VAGRANT_VMRAM=12288 00:01:42.979 SPDK_VAGRANT_PROVIDER=libvirt 00:01:42.979 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:42.979 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:42.979 SPDK_OPENSTACK_NETWORK=0 
00:01:42.979 VAGRANT_PACKAGE_BOX=0 00:01:42.979 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:42.979 FORCE_DISTRO=true 00:01:42.979 VAGRANT_BOX_VERSION= 00:01:42.979 EXTRA_VAGRANTFILES= 00:01:42.979 NIC_MODEL=e1000 00:01:42.979 00:01:42.979 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt' 00:01:42.979 /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest 00:01:46.261 Bringing machine 'default' up with 'libvirt' provider... 00:01:46.827 ==> default: Creating image (snapshot of base box volume). 00:01:46.827 ==> default: Creating domain with the following settings... 00:01:46.827 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1714193056_dcc2f7fea5c9cd112735 00:01:46.827 ==> default: -- Domain type: kvm 00:01:46.827 ==> default: -- Cpus: 10 00:01:46.827 ==> default: -- Feature: acpi 00:01:46.827 ==> default: -- Feature: apic 00:01:46.827 ==> default: -- Feature: pae 00:01:46.827 ==> default: -- Memory: 12288M 00:01:46.827 ==> default: -- Memory Backing: hugepages: 00:01:46.827 ==> default: -- Management MAC: 00:01:46.827 ==> default: -- Loader: 00:01:46.827 ==> default: -- Nvram: 00:01:46.827 ==> default: -- Base box: spdk/ubuntu2204 00:01:46.827 ==> default: -- Storage pool: default 00:01:46.827 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1714193056_dcc2f7fea5c9cd112735.img (20G) 00:01:46.827 ==> default: -- Volume Cache: default 00:01:46.827 ==> default: -- Kernel: 00:01:46.827 ==> default: -- Initrd: 00:01:46.827 ==> default: -- Graphics Type: vnc 00:01:46.827 ==> default: -- Graphics Port: -1 00:01:46.827 ==> default: -- Graphics IP: 127.0.0.1 00:01:46.827 ==> default: -- Graphics Password: Not defined 00:01:46.827 ==> default: -- Video Type: cirrus 00:01:46.827 ==> default: -- Video VRAM: 9216 00:01:46.827 ==> default: -- Sound Type: 00:01:46.827 ==> default: -- Keymap: en-us 00:01:46.827 ==> default: -- TPM Path: 00:01:46.827 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:46.827 ==> default: -- Command line args: 00:01:46.827 ==> default: -> value=-device, 00:01:46.827 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:46.827 ==> default: -> value=-drive, 00:01:46.827 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:01:46.827 ==> default: -> value=-device, 00:01:46.827 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:47.085 ==> default: Creating shared folders metadata... 00:01:47.085 ==> default: Starting domain. 00:01:48.986 ==> default: Waiting for domain to get an IP address... 00:02:03.872 ==> default: Waiting for SSH to become available... 00:02:04.440 ==> default: Configuring and enabling network interfaces... 00:02:09.711 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:14.979 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:19.201 ==> default: Mounting SSHFS shared folder... 00:02:20.137 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output 00:02:20.137 ==> default: Checking Mount.. 00:02:20.704 ==> default: Folder Successfully Mounted! 
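The "Command line args" entries above are the extra QEMU arguments vagrant-libvirt passes so the raw backing file appears in the guest as an emulated NVMe controller with a single namespace. The same wiring can be reproduced with a plain qemu-system-x86_64 invocation; in the sketch below only the -drive/-device triple mirrors the log, while the machine, memory and CPU options are illustrative:

  # Attach ex3-nvme.img as namespace 1 of an emulated NVMe controller (serial 12340).
  qemu-system-x86_64 \
    -machine q35,accel=kvm -m 4096 -smp 2 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme,id=nvme-0,serial=12340 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096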
00:02:20.704 ==> default: Running provisioner: file... 00:02:20.962 default: ~/.gitconfig => .gitconfig 00:02:21.225 00:02:21.225 SUCCESS! 00:02:21.225 00:02:21.225 cd to /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt and type "vagrant ssh" to use. 00:02:21.225 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:21.225 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt" to destroy all trace of vm. 00:02:21.225 00:02:21.236 [Pipeline] } 00:02:21.256 [Pipeline] // stage 00:02:21.267 [Pipeline] dir 00:02:21.268 Running in /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt 00:02:21.270 [Pipeline] { 00:02:21.286 [Pipeline] catchError 00:02:21.287 [Pipeline] { 00:02:21.302 [Pipeline] sh 00:02:21.585 + vagrant ssh-config --host vagrant 00:02:21.585 + sed -ne /^Host/,$p 00:02:21.585 + tee ssh_conf 00:02:24.869 Host vagrant 00:02:24.869 HostName 192.168.121.3 00:02:24.869 User vagrant 00:02:24.869 Port 22 00:02:24.869 UserKnownHostsFile /dev/null 00:02:24.869 StrictHostKeyChecking no 00:02:24.869 PasswordAuthentication no 00:02:24.869 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204 00:02:24.869 IdentitiesOnly yes 00:02:24.869 LogLevel FATAL 00:02:24.869 ForwardAgent yes 00:02:24.869 ForwardX11 yes 00:02:24.869 00:02:24.882 [Pipeline] withEnv 00:02:24.884 [Pipeline] { 00:02:24.899 [Pipeline] sh 00:02:25.178 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:25.178 source /etc/os-release 00:02:25.178 [[ -e /image.version ]] && img=$(< /image.version) 00:02:25.178 # Minimal, systemd-like check. 00:02:25.178 if [[ -e /.dockerenv ]]; then 00:02:25.178 # Clear garbage from the node's name: 00:02:25.178 # agt-er_autotest_547-896 -> autotest_547-896 00:02:25.178 # $HOSTNAME is the actual container id 00:02:25.178 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:25.178 if mountpoint -q /etc/hostname; then 00:02:25.178 # We can assume this is a mount from a host where container is running, 00:02:25.178 # so fetch its hostname to easily identify the target swarm worker. 
00:02:25.178 container="$(< /etc/hostname) ($agent)" 00:02:25.178 else 00:02:25.178 # Fallback 00:02:25.178 container=$agent 00:02:25.178 fi 00:02:25.178 fi 00:02:25.178 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:25.178 00:02:25.448 [Pipeline] } 00:02:25.466 [Pipeline] // withEnv 00:02:25.474 [Pipeline] setCustomBuildProperty 00:02:25.489 [Pipeline] stage 00:02:25.491 [Pipeline] { (Tests) 00:02:25.509 [Pipeline] sh 00:02:25.790 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:26.063 [Pipeline] timeout 00:02:26.064 Timeout set to expire in 1 hr 0 min 00:02:26.066 [Pipeline] { 00:02:26.082 [Pipeline] sh 00:02:26.362 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:26.929 HEAD is now at 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset 00:02:26.943 [Pipeline] sh 00:02:27.226 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:27.501 [Pipeline] sh 00:02:27.781 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:28.056 [Pipeline] sh 00:02:28.339 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:02:28.598 ++ readlink -f spdk_repo 00:02:28.598 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:28.598 + [[ -n /home/vagrant/spdk_repo ]] 00:02:28.598 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:28.598 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:28.598 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:28.598 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:02:28.598 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:28.598 + cd /home/vagrant/spdk_repo 00:02:28.598 + source /etc/os-release 00:02:28.598 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS' 00:02:28.598 ++ NAME=Ubuntu 00:02:28.598 ++ VERSION_ID=22.04 00:02:28.598 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)' 00:02:28.598 ++ VERSION_CODENAME=jammy 00:02:28.598 ++ ID=ubuntu 00:02:28.598 ++ ID_LIKE=debian 00:02:28.598 ++ HOME_URL=https://www.ubuntu.com/ 00:02:28.598 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:02:28.598 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:02:28.598 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:02:28.598 ++ UBUNTU_CODENAME=jammy 00:02:28.598 + uname -a 00:02:28.598 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:02:28.598 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:28.598 Hugepages 00:02:28.598 node hugesize free / total 00:02:28.598 node0 1048576kB 0 / 0 00:02:28.598 node0 2048kB 0 / 0 00:02:28.598 00:02:28.598 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:28.598 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:28.857 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:28.857 + rm -f /tmp/spdk-ld-path 00:02:28.857 + source autorun-spdk.conf 00:02:28.857 ++ SPDK_TEST_UNITTEST=1 00:02:28.857 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:28.857 ++ SPDK_TEST_NVME=1 00:02:28.857 ++ SPDK_TEST_BLOCKDEV=1 00:02:28.857 ++ SPDK_RUN_ASAN=1 00:02:28.857 ++ SPDK_RUN_UBSAN=1 00:02:28.857 ++ SPDK_TEST_RAID5=1 00:02:28.857 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:28.857 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:28.857 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:28.857 ++ RUN_NIGHTLY=1 00:02:28.857 + (( SPDK_TEST_NVME_CMB == 
1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:28.857 + [[ -n '' ]] 00:02:28.857 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:28.857 + for M in /var/spdk/build-*-manifest.txt 00:02:28.857 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:28.857 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:28.857 + for M in /var/spdk/build-*-manifest.txt 00:02:28.857 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:28.857 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:28.857 ++ uname 00:02:28.857 + [[ Linux == \L\i\n\u\x ]] 00:02:28.857 + sudo dmesg -T 00:02:28.857 + sudo dmesg --clear 00:02:28.857 + dmesg_pid=3292 00:02:28.857 + sudo dmesg -Tw 00:02:28.857 + [[ Ubuntu == FreeBSD ]] 00:02:28.857 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:28.857 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:28.857 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:28.857 + [[ -x /usr/src/fio-static/fio ]] 00:02:28.857 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:28.857 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:28.857 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:28.857 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:02:28.857 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:28.857 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:28.857 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:28.857 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:28.857 Test configuration: 00:02:28.857 SPDK_TEST_UNITTEST=1 00:02:28.857 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:28.857 SPDK_TEST_NVME=1 00:02:28.857 SPDK_TEST_BLOCKDEV=1 00:02:28.857 SPDK_RUN_ASAN=1 00:02:28.857 SPDK_RUN_UBSAN=1 00:02:28.857 SPDK_TEST_RAID5=1 00:02:28.857 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:28.857 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:28.857 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:28.857 RUN_NIGHTLY=1 04:44:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:28.857 04:44:58 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:28.857 04:44:58 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:28.857 04:44:58 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:28.857 04:44:58 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:28.857 04:44:58 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:28.857 04:44:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:28.857 04:44:58 -- paths/export.sh@5 -- $ export PATH 00:02:28.857 04:44:58 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:28.857 04:44:58 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:28.857 04:44:58 -- common/autobuild_common.sh@435 -- $ date +%s 00:02:28.857 04:44:58 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714193098.XXXXXX 00:02:28.857 04:44:58 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714193098.dlWal2 00:02:28.857 04:44:58 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:02:28.857 04:44:58 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:02:28.857 04:44:58 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:28.857 04:44:58 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:28.857 04:44:58 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:28.857 04:44:58 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:28.857 04:44:58 -- common/autobuild_common.sh@451 -- $ get_config_params 00:02:28.857 04:44:58 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:02:28.857 04:44:58 -- common/autotest_common.sh@10 -- $ set +x 00:02:28.857 04:44:58 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:28.857 04:44:58 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:28.857 04:44:58 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:28.857 04:44:58 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:28.857 04:44:58 -- spdk/autobuild.sh@16 -- $ date -u 00:02:28.857 Sat Apr 27 04:44:58 UTC 2024 00:02:28.857 04:44:58 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:28.857 LTS-24-g36faa8c31 00:02:28.857 04:44:58 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:28.857 04:44:58 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:28.857 04:44:58 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:28.857 04:44:58 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:28.857 04:44:58 -- common/autotest_common.sh@10 -- $ set +x 00:02:28.857 ************************************ 00:02:28.857 START TEST asan 00:02:28.857 ************************************ 00:02:28.857 using asan 00:02:28.857 04:44:58 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:02:28.857 00:02:28.857 real 0m0.000s 00:02:28.857 user 0m0.000s 00:02:28.857 sys 0m0.000s 00:02:28.857 04:44:58 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:28.857 ************************************ 00:02:28.857 END TEST asan 00:02:28.857 04:44:58 -- common/autotest_common.sh@10 -- $ set +x 00:02:28.857 ************************************ 00:02:29.116 04:44:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:29.116 04:44:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:29.116 04:44:58 -- 
common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:29.116 04:44:58 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:29.116 04:44:58 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.116 ************************************ 00:02:29.116 START TEST ubsan 00:02:29.116 ************************************ 00:02:29.116 using ubsan 00:02:29.116 04:44:58 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:02:29.116 00:02:29.116 real 0m0.000s 00:02:29.116 user 0m0.000s 00:02:29.116 sys 0m0.000s 00:02:29.116 04:44:58 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:29.116 ************************************ 00:02:29.116 04:44:58 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.116 END TEST ubsan 00:02:29.116 ************************************ 00:02:29.116 04:44:58 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:29.116 04:44:58 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:29.116 04:44:58 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:29.116 04:44:58 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:02:29.116 04:44:58 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:29.116 04:44:58 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.116 ************************************ 00:02:29.116 START TEST build_native_dpdk 00:02:29.116 ************************************ 00:02:29.116 04:44:58 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:02:29.116 04:44:58 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:29.116 04:44:58 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:29.116 04:44:58 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:29.116 04:44:58 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:29.116 04:44:58 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:29.116 04:44:58 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:29.116 04:44:58 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:29.116 04:44:58 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:29.116 04:44:58 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:29.116 04:44:58 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:29.116 04:44:58 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:29.116 04:44:58 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:29.116 04:44:58 -- common/autobuild_common.sh@68 -- $ compiler_version=11 00:02:29.116 04:44:58 -- common/autobuild_common.sh@69 -- $ compiler_version=11 00:02:29.116 04:44:58 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:29.116 04:44:58 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:29.116 04:44:58 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:29.116 04:44:58 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:29.117 04:44:58 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:29.117 04:44:58 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:29.117 eeb0605f11 version: 23.11.0 00:02:29.117 238778122a doc: update release notes for 23.11 00:02:29.117 46aa6b3cfc doc: fix description of RSS features 00:02:29.117 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:29.117 7e421ae345 devtools: support skipping forbid rule check 00:02:29.117 04:44:58 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:29.117 04:44:58 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:29.117 04:44:58 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:29.117 04:44:58 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:29.117 04:44:58 -- common/autobuild_common.sh@89 -- $ [[ 11 -ge 5 ]] 00:02:29.117 04:44:58 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:29.117 04:44:58 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:29.117 04:44:58 -- common/autobuild_common.sh@93 -- $ [[ 11 -ge 10 ]] 00:02:29.117 04:44:58 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:29.117 04:44:58 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:29.117 04:44:58 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:29.117 04:44:58 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:29.117 04:44:58 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:29.117 04:44:58 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:29.117 04:44:58 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:29.117 04:44:58 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:29.117 04:44:58 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:29.117 04:44:58 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:29.117 04:44:58 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:29.117 04:44:58 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:29.117 04:44:58 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:29.117 04:44:58 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:29.117 04:44:58 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:29.117 04:44:58 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:29.117 04:44:58 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:29.117 04:44:58 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:29.117 04:44:58 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:29.117 04:44:58 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:29.117 04:44:58 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:29.117 04:44:58 -- scripts/common.sh@343 -- $ case "$op" in 00:02:29.117 04:44:58 -- scripts/common.sh@344 -- $ : 1 00:02:29.117 04:44:58 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:29.117 04:44:58 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:29.117 04:44:58 -- scripts/common.sh@364 -- $ decimal 23 00:02:29.117 04:44:58 -- scripts/common.sh@352 -- $ local d=23 00:02:29.117 04:44:58 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:29.117 04:44:58 -- scripts/common.sh@354 -- $ echo 23 00:02:29.117 04:44:58 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:29.117 04:44:58 -- scripts/common.sh@365 -- $ decimal 21 00:02:29.117 04:44:58 -- scripts/common.sh@352 -- $ local d=21 00:02:29.117 04:44:58 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:29.117 04:44:58 -- scripts/common.sh@354 -- $ echo 21 00:02:29.117 04:44:58 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:29.117 04:44:58 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:29.117 04:44:58 -- scripts/common.sh@366 -- $ return 1 00:02:29.117 04:44:58 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:29.117 patching file config/rte_config.h 00:02:29.117 Hunk #1 succeeded at 60 (offset 1 line). 00:02:29.117 04:44:58 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:29.117 04:44:58 -- common/autobuild_common.sh@178 -- $ uname -s 00:02:29.117 04:44:58 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:29.117 04:44:58 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:29.117 04:44:58 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:34.384 The Meson build system 00:02:34.384 Version: 1.4.0 00:02:34.384 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:34.384 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:34.384 Build type: native build 00:02:34.384 Program cat found: YES (/usr/bin/cat) 00:02:34.384 Project name: DPDK 00:02:34.384 Project version: 23.11.0 00:02:34.384 C compiler for the host machine: gcc (gcc 11.4.0 "gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:02:34.384 C linker for the host machine: gcc ld.bfd 2.38 00:02:34.384 Host machine cpu family: x86_64 00:02:34.384 Host machine cpu: x86_64 00:02:34.384 Message: ## Building in Developer Mode ## 00:02:34.384 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:34.384 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:34.384 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:34.384 Program python3 found: YES (/usr/bin/python3) 00:02:34.384 Program cat found: YES (/usr/bin/cat) 00:02:34.384 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
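The lt/cmp_versions trace above is the gate that decides whether the checked-out DPDK needs the pre-21.11 compatibility path: both version strings are split on ".-:" and the fields are compared left to right. A condensed, purely illustrative sketch of that check (the real helpers live in spdk/scripts/common.sh; this sketch assumes purely numeric fields):

  # Return 0 when $1 is strictly older than $2; assumes numeric dot-separated fields.
  version_lt() {
      local -a v1 v2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 23.11.0 21.11.0 || echo "23.11.0 is not older than 21.11.0, so the 23.x config patch is applied"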
00:02:34.384 Compiler for C supports arguments -march=native: YES 00:02:34.384 Checking for size of "void *" : 8 00:02:34.384 Checking for size of "void *" : 8 (cached) 00:02:34.384 Library m found: YES 00:02:34.384 Library numa found: YES 00:02:34.384 Has header "numaif.h" : YES 00:02:34.384 Library fdt found: NO 00:02:34.384 Library execinfo found: NO 00:02:34.384 Has header "execinfo.h" : YES 00:02:34.384 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:02:34.384 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:34.384 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:34.384 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:34.384 Run-time dependency openssl found: YES 3.0.2 00:02:34.384 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:34.384 Library pcap found: NO 00:02:34.384 Compiler for C supports arguments -Wcast-qual: YES 00:02:34.384 Compiler for C supports arguments -Wdeprecated: YES 00:02:34.384 Compiler for C supports arguments -Wformat: YES 00:02:34.384 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:34.384 Compiler for C supports arguments -Wformat-security: YES 00:02:34.384 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:34.384 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:34.384 Compiler for C supports arguments -Wnested-externs: YES 00:02:34.384 Compiler for C supports arguments -Wold-style-definition: YES 00:02:34.384 Compiler for C supports arguments -Wpointer-arith: YES 00:02:34.384 Compiler for C supports arguments -Wsign-compare: YES 00:02:34.384 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:34.384 Compiler for C supports arguments -Wundef: YES 00:02:34.384 Compiler for C supports arguments -Wwrite-strings: YES 00:02:34.384 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:34.384 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:34.384 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:34.384 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:34.384 Program objdump found: YES (/usr/bin/objdump) 00:02:34.384 Compiler for C supports arguments -mavx512f: YES 00:02:34.384 Checking if "AVX512 checking" compiles: YES 00:02:34.384 Fetching value of define "__SSE4_2__" : 1 00:02:34.384 Fetching value of define "__AES__" : 1 00:02:34.384 Fetching value of define "__AVX__" : 1 00:02:34.384 Fetching value of define "__AVX2__" : 1 00:02:34.384 Fetching value of define "__AVX512BW__" : (undefined) 00:02:34.384 Fetching value of define "__AVX512CD__" : (undefined) 00:02:34.384 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:34.384 Fetching value of define "__AVX512F__" : (undefined) 00:02:34.384 Fetching value of define "__AVX512VL__" : (undefined) 00:02:34.384 Fetching value of define "__PCLMUL__" : 1 00:02:34.384 Fetching value of define "__RDRND__" : 1 00:02:34.384 Fetching value of define "__RDSEED__" : 1 00:02:34.384 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:34.384 Fetching value of define "__znver1__" : (undefined) 00:02:34.384 Fetching value of define "__znver2__" : (undefined) 00:02:34.384 Fetching value of define "__znver3__" : (undefined) 00:02:34.384 Fetching value of define "__znver4__" : (undefined) 00:02:34.384 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:34.384 Message: lib/log: Defining dependency "log" 00:02:34.384 Message: lib/kvargs: Defining dependency "kvargs" 00:02:34.384 Message: 
lib/telemetry: Defining dependency "telemetry" 00:02:34.384 Checking for function "getentropy" : NO 00:02:34.384 Message: lib/eal: Defining dependency "eal" 00:02:34.384 Message: lib/ring: Defining dependency "ring" 00:02:34.384 Message: lib/rcu: Defining dependency "rcu" 00:02:34.384 Message: lib/mempool: Defining dependency "mempool" 00:02:34.384 Message: lib/mbuf: Defining dependency "mbuf" 00:02:34.384 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:34.384 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:34.384 Compiler for C supports arguments -mpclmul: YES 00:02:34.384 Compiler for C supports arguments -maes: YES 00:02:34.384 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:34.384 Compiler for C supports arguments -mavx512bw: YES 00:02:34.384 Compiler for C supports arguments -mavx512dq: YES 00:02:34.384 Compiler for C supports arguments -mavx512vl: YES 00:02:34.384 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:34.384 Compiler for C supports arguments -mavx2: YES 00:02:34.384 Compiler for C supports arguments -mavx: YES 00:02:34.384 Message: lib/net: Defining dependency "net" 00:02:34.384 Message: lib/meter: Defining dependency "meter" 00:02:34.384 Message: lib/ethdev: Defining dependency "ethdev" 00:02:34.384 Message: lib/pci: Defining dependency "pci" 00:02:34.384 Message: lib/cmdline: Defining dependency "cmdline" 00:02:34.384 Message: lib/metrics: Defining dependency "metrics" 00:02:34.384 Message: lib/hash: Defining dependency "hash" 00:02:34.384 Message: lib/timer: Defining dependency "timer" 00:02:34.384 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:34.384 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:34.384 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:34.384 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:34.384 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:34.384 Message: lib/acl: Defining dependency "acl" 00:02:34.384 Message: lib/bbdev: Defining dependency "bbdev" 00:02:34.384 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:34.384 Run-time dependency libelf found: YES 0.186 00:02:34.384 lib/bpf/meson.build:43: WARNING: libpcap is missing, rte_bpf_convert API will be disabled 00:02:34.384 Message: lib/bpf: Defining dependency "bpf" 00:02:34.384 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:34.384 Message: lib/compressdev: Defining dependency "compressdev" 00:02:34.384 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:34.384 Message: lib/distributor: Defining dependency "distributor" 00:02:34.384 Message: lib/dmadev: Defining dependency "dmadev" 00:02:34.384 Message: lib/efd: Defining dependency "efd" 00:02:34.384 Message: lib/eventdev: Defining dependency "eventdev" 00:02:34.384 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:34.384 Message: lib/gpudev: Defining dependency "gpudev" 00:02:34.384 Message: lib/gro: Defining dependency "gro" 00:02:34.384 Message: lib/gso: Defining dependency "gso" 00:02:34.384 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:34.384 Message: lib/jobstats: Defining dependency "jobstats" 00:02:34.384 Message: lib/latencystats: Defining dependency "latencystats" 00:02:34.384 Message: lib/lpm: Defining dependency "lpm" 00:02:34.384 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:34.384 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:34.384 Fetching value of 
define "__AVX512IFMA__" : (undefined) 00:02:34.384 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:34.384 Message: lib/member: Defining dependency "member" 00:02:34.384 Message: lib/pcapng: Defining dependency "pcapng" 00:02:34.384 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:34.384 Message: lib/power: Defining dependency "power" 00:02:34.384 Message: lib/rawdev: Defining dependency "rawdev" 00:02:34.384 Message: lib/regexdev: Defining dependency "regexdev" 00:02:34.384 Message: lib/mldev: Defining dependency "mldev" 00:02:34.384 Message: lib/rib: Defining dependency "rib" 00:02:34.384 Message: lib/reorder: Defining dependency "reorder" 00:02:34.384 Message: lib/sched: Defining dependency "sched" 00:02:34.384 Message: lib/security: Defining dependency "security" 00:02:34.384 Message: lib/stack: Defining dependency "stack" 00:02:34.384 Has header "linux/userfaultfd.h" : YES 00:02:34.384 Has header "linux/vduse.h" : YES 00:02:34.384 Message: lib/vhost: Defining dependency "vhost" 00:02:34.384 Message: lib/ipsec: Defining dependency "ipsec" 00:02:34.384 Message: lib/pdcp: Defining dependency "pdcp" 00:02:34.384 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:34.384 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:34.384 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:34.385 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:34.385 Message: lib/fib: Defining dependency "fib" 00:02:34.385 Message: lib/port: Defining dependency "port" 00:02:34.385 Message: lib/pdump: Defining dependency "pdump" 00:02:34.385 Message: lib/table: Defining dependency "table" 00:02:34.385 Message: lib/pipeline: Defining dependency "pipeline" 00:02:34.385 Message: lib/graph: Defining dependency "graph" 00:02:34.385 Message: lib/node: Defining dependency "node" 00:02:35.761 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:35.761 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:35.761 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:35.761 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:35.761 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:35.761 Compiler for C supports arguments -Wno-unused-value: YES 00:02:35.761 Compiler for C supports arguments -Wno-format: YES 00:02:35.761 Compiler for C supports arguments -Wno-format-security: YES 00:02:35.761 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:35.761 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:35.761 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:35.761 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:35.761 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:35.761 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:35.761 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:35.761 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:35.761 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:35.761 Has header "sys/epoll.h" : YES 00:02:35.761 Program doxygen found: YES (/usr/bin/doxygen) 00:02:35.761 Configuring doxy-api-html.conf using configuration 00:02:35.761 Configuring doxy-api-man.conf using configuration 00:02:35.761 Program mandb found: YES (/usr/bin/mandb) 00:02:35.762 Program sphinx-build found: NO 00:02:35.762 Configuring rte_build_config.h using configuration 00:02:35.762 Message: 00:02:35.762 
================= 00:02:35.762 Applications Enabled 00:02:35.762 ================= 00:02:35.762 00:02:35.762 apps: 00:02:35.762 graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:35.762 test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, test-pmd, 00:02:35.762 test-regex, test-sad, test-security-perf, 00:02:35.762 00:02:35.762 Message: 00:02:35.762 ================= 00:02:35.762 Libraries Enabled 00:02:35.762 ================= 00:02:35.762 00:02:35.762 libs: 00:02:35.762 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:35.762 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:35.762 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:35.762 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:35.762 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:35.762 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:35.762 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:35.762 00:02:35.762 00:02:35.762 Message: 00:02:35.762 =============== 00:02:35.762 Drivers Enabled 00:02:35.762 =============== 00:02:35.762 00:02:35.762 common: 00:02:35.762 00:02:35.762 bus: 00:02:35.762 pci, vdev, 00:02:35.762 mempool: 00:02:35.762 ring, 00:02:35.762 dma: 00:02:35.762 00:02:35.762 net: 00:02:35.762 i40e, 00:02:35.762 raw: 00:02:35.762 00:02:35.762 crypto: 00:02:35.762 00:02:35.762 compress: 00:02:35.762 00:02:35.762 regex: 00:02:35.762 00:02:35.762 ml: 00:02:35.762 00:02:35.762 vdpa: 00:02:35.762 00:02:35.762 event: 00:02:35.762 00:02:35.762 baseband: 00:02:35.762 00:02:35.762 gpu: 00:02:35.762 00:02:35.762 00:02:35.762 Message: 00:02:35.762 ================= 00:02:35.762 Content Skipped 00:02:35.762 ================= 00:02:35.762 00:02:35.762 apps: 00:02:35.762 dumpcap: missing dependency, "libpcap" 00:02:35.762 00:02:35.762 libs: 00:02:35.762 00:02:35.762 drivers: 00:02:35.762 common/cpt: not in enabled drivers build config 00:02:35.762 common/dpaax: not in enabled drivers build config 00:02:35.762 common/iavf: not in enabled drivers build config 00:02:35.762 common/idpf: not in enabled drivers build config 00:02:35.762 common/mvep: not in enabled drivers build config 00:02:35.762 common/octeontx: not in enabled drivers build config 00:02:35.762 bus/auxiliary: not in enabled drivers build config 00:02:35.762 bus/cdx: not in enabled drivers build config 00:02:35.762 bus/dpaa: not in enabled drivers build config 00:02:35.762 bus/fslmc: not in enabled drivers build config 00:02:35.762 bus/ifpga: not in enabled drivers build config 00:02:35.762 bus/platform: not in enabled drivers build config 00:02:35.762 bus/vmbus: not in enabled drivers build config 00:02:35.762 common/cnxk: not in enabled drivers build config 00:02:35.762 common/mlx5: not in enabled drivers build config 00:02:35.762 common/nfp: not in enabled drivers build config 00:02:35.762 common/qat: not in enabled drivers build config 00:02:35.762 common/sfc_efx: not in enabled drivers build config 00:02:35.762 mempool/bucket: not in enabled drivers build config 00:02:35.762 mempool/cnxk: not in enabled drivers build config 00:02:35.762 mempool/dpaa: not in enabled drivers build config 00:02:35.762 mempool/dpaa2: not in enabled drivers build config 00:02:35.762 mempool/octeontx: not in enabled drivers build config 00:02:35.762 mempool/stack: not in enabled drivers build config 00:02:35.762 dma/cnxk: not in enabled drivers build config 
00:02:35.762 dma/dpaa: not in enabled drivers build config 00:02:35.762 dma/dpaa2: not in enabled drivers build config 00:02:35.762 dma/hisilicon: not in enabled drivers build config 00:02:35.762 dma/idxd: not in enabled drivers build config 00:02:35.762 dma/ioat: not in enabled drivers build config 00:02:35.762 dma/skeleton: not in enabled drivers build config 00:02:35.762 net/af_packet: not in enabled drivers build config 00:02:35.762 net/af_xdp: not in enabled drivers build config 00:02:35.762 net/ark: not in enabled drivers build config 00:02:35.762 net/atlantic: not in enabled drivers build config 00:02:35.762 net/avp: not in enabled drivers build config 00:02:35.762 net/axgbe: not in enabled drivers build config 00:02:35.762 net/bnx2x: not in enabled drivers build config 00:02:35.762 net/bnxt: not in enabled drivers build config 00:02:35.762 net/bonding: not in enabled drivers build config 00:02:35.762 net/cnxk: not in enabled drivers build config 00:02:35.762 net/cpfl: not in enabled drivers build config 00:02:35.762 net/cxgbe: not in enabled drivers build config 00:02:35.762 net/dpaa: not in enabled drivers build config 00:02:35.762 net/dpaa2: not in enabled drivers build config 00:02:35.762 net/e1000: not in enabled drivers build config 00:02:35.762 net/ena: not in enabled drivers build config 00:02:35.762 net/enetc: not in enabled drivers build config 00:02:35.762 net/enetfec: not in enabled drivers build config 00:02:35.762 net/enic: not in enabled drivers build config 00:02:35.762 net/failsafe: not in enabled drivers build config 00:02:35.762 net/fm10k: not in enabled drivers build config 00:02:35.762 net/gve: not in enabled drivers build config 00:02:35.762 net/hinic: not in enabled drivers build config 00:02:35.762 net/hns3: not in enabled drivers build config 00:02:35.762 net/iavf: not in enabled drivers build config 00:02:35.762 net/ice: not in enabled drivers build config 00:02:35.762 net/idpf: not in enabled drivers build config 00:02:35.762 net/igc: not in enabled drivers build config 00:02:35.762 net/ionic: not in enabled drivers build config 00:02:35.762 net/ipn3ke: not in enabled drivers build config 00:02:35.762 net/ixgbe: not in enabled drivers build config 00:02:35.762 net/mana: not in enabled drivers build config 00:02:35.762 net/memif: not in enabled drivers build config 00:02:35.762 net/mlx4: not in enabled drivers build config 00:02:35.762 net/mlx5: not in enabled drivers build config 00:02:35.762 net/mvneta: not in enabled drivers build config 00:02:35.762 net/mvpp2: not in enabled drivers build config 00:02:35.762 net/netvsc: not in enabled drivers build config 00:02:35.762 net/nfb: not in enabled drivers build config 00:02:35.762 net/nfp: not in enabled drivers build config 00:02:35.762 net/ngbe: not in enabled drivers build config 00:02:35.762 net/null: not in enabled drivers build config 00:02:35.762 net/octeontx: not in enabled drivers build config 00:02:35.762 net/octeon_ep: not in enabled drivers build config 00:02:35.762 net/pcap: not in enabled drivers build config 00:02:35.762 net/pfe: not in enabled drivers build config 00:02:35.762 net/qede: not in enabled drivers build config 00:02:35.762 net/ring: not in enabled drivers build config 00:02:35.762 net/sfc: not in enabled drivers build config 00:02:35.762 net/softnic: not in enabled drivers build config 00:02:35.762 net/tap: not in enabled drivers build config 00:02:35.762 net/thunderx: not in enabled drivers build config 00:02:35.762 net/txgbe: not in enabled drivers build config 00:02:35.762 
net/vdev_netvsc: not in enabled drivers build config 00:02:35.762 net/vhost: not in enabled drivers build config 00:02:35.762 net/virtio: not in enabled drivers build config 00:02:35.762 net/vmxnet3: not in enabled drivers build config 00:02:35.762 raw/cnxk_bphy: not in enabled drivers build config 00:02:35.762 raw/cnxk_gpio: not in enabled drivers build config 00:02:35.762 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:35.762 raw/ifpga: not in enabled drivers build config 00:02:35.762 raw/ntb: not in enabled drivers build config 00:02:35.762 raw/skeleton: not in enabled drivers build config 00:02:35.762 crypto/armv8: not in enabled drivers build config 00:02:35.762 crypto/bcmfs: not in enabled drivers build config 00:02:35.762 crypto/caam_jr: not in enabled drivers build config 00:02:35.762 crypto/ccp: not in enabled drivers build config 00:02:35.762 crypto/cnxk: not in enabled drivers build config 00:02:35.762 crypto/dpaa_sec: not in enabled drivers build config 00:02:35.762 crypto/dpaa2_sec: not in enabled drivers build config 00:02:35.762 crypto/ipsec_mb: not in enabled drivers build config 00:02:35.762 crypto/mlx5: not in enabled drivers build config 00:02:35.762 crypto/mvsam: not in enabled drivers build config 00:02:35.762 crypto/nitrox: not in enabled drivers build config 00:02:35.762 crypto/null: not in enabled drivers build config 00:02:35.762 crypto/octeontx: not in enabled drivers build config 00:02:35.762 crypto/openssl: not in enabled drivers build config 00:02:35.762 crypto/scheduler: not in enabled drivers build config 00:02:35.762 crypto/uadk: not in enabled drivers build config 00:02:35.762 crypto/virtio: not in enabled drivers build config 00:02:35.762 compress/isal: not in enabled drivers build config 00:02:35.762 compress/mlx5: not in enabled drivers build config 00:02:35.762 compress/octeontx: not in enabled drivers build config 00:02:35.762 compress/zlib: not in enabled drivers build config 00:02:35.762 regex/mlx5: not in enabled drivers build config 00:02:35.762 regex/cn9k: not in enabled drivers build config 00:02:35.762 ml/cnxk: not in enabled drivers build config 00:02:35.762 vdpa/ifc: not in enabled drivers build config 00:02:35.762 vdpa/mlx5: not in enabled drivers build config 00:02:35.762 vdpa/nfp: not in enabled drivers build config 00:02:35.762 vdpa/sfc: not in enabled drivers build config 00:02:35.762 event/cnxk: not in enabled drivers build config 00:02:35.762 event/dlb2: not in enabled drivers build config 00:02:35.762 event/dpaa: not in enabled drivers build config 00:02:35.762 event/dpaa2: not in enabled drivers build config 00:02:35.762 event/dsw: not in enabled drivers build config 00:02:35.762 event/opdl: not in enabled drivers build config 00:02:35.762 event/skeleton: not in enabled drivers build config 00:02:35.762 event/sw: not in enabled drivers build config 00:02:35.762 event/octeontx: not in enabled drivers build config 00:02:35.762 baseband/acc: not in enabled drivers build config 00:02:35.762 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:35.762 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:35.762 baseband/la12xx: not in enabled drivers build config 00:02:35.762 baseband/null: not in enabled drivers build config 00:02:35.762 baseband/turbo_sw: not in enabled drivers build config 00:02:35.762 gpu/cuda: not in enabled drivers build config 00:02:35.762 00:02:35.763 00:02:35.763 Build targets in project: 219 00:02:35.763 00:02:35.763 DPDK 23.11.0 00:02:35.763 00:02:35.763 User defined options 
00:02:35.763 libdir : lib 00:02:35.763 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:35.763 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:35.763 c_link_args : 00:02:35.763 enable_docs : false 00:02:35.763 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:35.763 enable_kmods : false 00:02:35.763 machine : native 00:02:35.763 tests : false 00:02:35.763 00:02:35.763 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:35.763 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:35.763 04:45:05 -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:35.763 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:36.021 [1/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:36.021 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:36.021 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:36.021 [4/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:36.021 [5/707] Linking static target lib/librte_kvargs.a 00:02:36.021 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:36.021 [7/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:36.280 [8/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:36.280 [9/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:36.280 [10/707] Linking static target lib/librte_log.a 00:02:36.280 [11/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:36.280 [12/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.280 [13/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:36.280 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:36.539 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:36.539 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:36.539 [17/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:36.819 [18/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:36.819 [19/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:36.819 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:36.819 [21/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:36.819 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:36.819 [23/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.077 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:37.077 [25/707] Linking target lib/librte_log.so.24.0 00:02:37.077 [26/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:37.077 [27/707] Linking static target lib/librte_telemetry.a 00:02:37.077 [28/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:37.077 [29/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:37.336 [30/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:37.336 [31/707] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:37.336 [32/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:37.336 [33/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:37.336 [34/707] Linking target lib/librte_kvargs.so.24.0 00:02:37.336 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:37.594 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:37.594 [37/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:37.594 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:37.594 [39/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.594 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:37.594 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:37.594 [42/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:37.594 [43/707] Linking target lib/librte_telemetry.so.24.0 00:02:37.594 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:37.594 [45/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:37.853 [46/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:37.853 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:37.853 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:38.112 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:38.112 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:38.112 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:38.112 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:38.112 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:38.112 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:38.112 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:38.371 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:38.371 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:38.371 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:38.371 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:38.371 [60/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:38.371 [61/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:38.371 [62/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:38.371 [63/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:38.371 [64/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:38.630 [65/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:38.630 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:38.630 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:38.630 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:38.888 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:38.888 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 
00:02:38.888 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:38.888 [72/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:38.888 [73/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:38.888 [74/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:38.888 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:38.888 [76/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:38.888 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:38.888 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:39.146 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:39.146 [80/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:39.146 [81/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:39.404 [82/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:39.404 [83/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:39.404 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:39.404 [85/707] Linking static target lib/librte_ring.a 00:02:39.405 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:39.663 [87/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:39.663 [88/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:39.663 [89/707] Linking static target lib/librte_eal.a 00:02:39.663 [90/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:39.663 [91/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:39.663 [92/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.921 [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:39.921 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:39.921 [95/707] Linking static target lib/librte_mempool.a 00:02:39.921 [96/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:39.921 [97/707] Linking static target lib/librte_rcu.a 00:02:40.179 [98/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:40.179 [99/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:40.179 [100/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:40.179 [101/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:40.179 [102/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:40.179 [103/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:40.179 [104/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.179 [105/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:40.436 [106/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.436 [107/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:40.436 [108/707] Linking static target lib/librte_mbuf.a 00:02:40.436 [109/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:40.436 [110/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:40.436 [111/707] Linking static target lib/librte_meter.a 00:02:40.436 [112/707] Linking static target lib/librte_net.a 00:02:40.694 [113/707] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:40.694 [114/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.694 [115/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.694 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:40.694 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:40.952 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:41.210 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.210 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:41.210 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:41.777 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:41.777 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:41.777 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:41.777 [125/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:41.777 [126/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:41.777 [127/707] Linking static target lib/librte_pci.a 00:02:41.777 [128/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:41.777 [129/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:41.777 [130/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:42.036 [131/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:42.036 [132/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:42.036 [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.036 [134/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:42.036 [135/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:42.036 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:42.036 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:42.036 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:42.036 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:42.036 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:42.295 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:42.295 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:42.295 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:42.295 [144/707] Linking static target lib/librte_cmdline.a 00:02:42.295 [145/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:42.553 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:42.554 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:42.554 [148/707] Linking static target lib/librte_metrics.a 00:02:42.554 [149/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:42.813 [150/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:42.813 [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.071 [152/707] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:43.071 [153/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:43.071 [154/707] Linking static target lib/librte_timer.a 00:02:43.330 [155/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.330 [156/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.589 [157/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:43.589 [158/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:43.848 [159/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:43.848 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:43.848 [161/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:44.160 [162/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:44.160 [163/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:44.418 [164/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:44.418 [165/707] Linking static target lib/librte_bbdev.a 00:02:44.418 [166/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:44.418 [167/707] Linking static target lib/librte_hash.a 00:02:44.677 [168/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:44.677 [169/707] Linking static target lib/librte_bitratestats.a 00:02:44.677 [170/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:44.677 [171/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:44.677 [172/707] Linking static target lib/librte_ethdev.a 00:02:44.677 [173/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.936 [174/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:45.194 [175/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.194 [176/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.194 [177/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:45.194 [178/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:45.194 [179/707] Linking static target lib/acl/libavx2_tmp.a 00:02:45.453 [180/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:45.453 [181/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:45.453 [182/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:45.453 [183/707] Linking static target lib/librte_cfgfile.a 00:02:45.453 [184/707] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:45.453 [185/707] Linking static target lib/acl/libavx512_tmp.a 00:02:45.453 [186/707] Linking static target lib/librte_acl.a 00:02:45.712 [187/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:45.712 [188/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:45.712 [189/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:45.975 [190/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.975 [191/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.975 [192/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:45.975 [193/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:45.975 [194/707] Linking static target 
lib/librte_compressdev.a 00:02:46.235 [195/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:46.235 [196/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:46.235 [197/707] Linking static target lib/librte_bpf.a 00:02:46.235 [198/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:46.494 [199/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:46.494 [200/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:46.494 [201/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.494 [202/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.752 [203/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:46.752 [204/707] Linking static target lib/librte_distributor.a 00:02:46.752 [205/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:47.009 [206/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.009 [207/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:47.009 [208/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:47.009 [209/707] Linking static target lib/librte_dmadev.a 00:02:47.267 [210/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:47.267 [211/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.525 [212/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.525 [213/707] Linking target lib/librte_eal.so.24.0 00:02:47.525 [214/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:47.525 [215/707] Linking target lib/librte_ring.so.24.0 00:02:47.783 [216/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:47.783 [217/707] Linking target lib/librte_meter.so.24.0 00:02:47.783 [218/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:47.783 [219/707] Linking target lib/librte_rcu.so.24.0 00:02:47.783 [220/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:47.783 [221/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:47.783 [222/707] Linking target lib/librte_mempool.so.24.0 00:02:47.783 [223/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:47.783 [224/707] Linking target lib/librte_pci.so.24.0 00:02:47.783 [225/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:47.784 [226/707] Linking target lib/librte_timer.so.24.0 00:02:47.784 [227/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:48.043 [228/707] Linking target lib/librte_acl.so.24.0 00:02:48.043 [229/707] Linking target lib/librte_cfgfile.so.24.0 00:02:48.043 [230/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:48.043 [231/707] Linking target lib/librte_dmadev.so.24.0 00:02:48.043 [232/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:48.043 [233/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:48.043 [234/707] Linking static target lib/librte_efd.a 00:02:48.043 
[235/707] Linking target lib/librte_mbuf.so.24.0 00:02:48.043 [236/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:48.043 [237/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:48.043 [238/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:48.043 [239/707] Linking static target lib/librte_cryptodev.a 00:02:48.043 [240/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:48.301 [241/707] Linking target lib/librte_net.so.24.0 00:02:48.301 [242/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.301 [243/707] Linking target lib/librte_bbdev.so.24.0 00:02:48.301 [244/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:48.301 [245/707] Linking target lib/librte_distributor.so.24.0 00:02:48.301 [246/707] Linking target lib/librte_compressdev.so.24.0 00:02:48.301 [247/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:48.301 [248/707] Linking target lib/librte_cmdline.so.24.0 00:02:48.560 [249/707] Linking target lib/librte_hash.so.24.0 00:02:48.560 [250/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:48.560 [251/707] Linking target lib/librte_efd.so.24.0 00:02:48.560 [252/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:48.560 [253/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:48.818 [254/707] Linking static target lib/librte_dispatcher.a 00:02:48.818 [255/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:48.818 [256/707] Linking static target lib/librte_gpudev.a 00:02:49.076 [257/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:49.076 [258/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:49.076 [259/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.076 [260/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:49.334 [261/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:49.334 [262/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.592 [263/707] Linking target lib/librte_cryptodev.so.24.0 00:02:49.592 [264/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:49.592 [265/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:49.592 [266/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.592 [267/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:49.592 [268/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:49.592 [269/707] Linking target lib/librte_gpudev.so.24.0 00:02:49.592 [270/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:49.850 [271/707] Linking static target lib/librte_gro.a 00:02:49.850 [272/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:49.850 [273/707] Linking static target lib/librte_eventdev.a 00:02:49.850 [274/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:49.850 [275/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.108 [276/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:50.108 [277/707] 
Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:50.108 [278/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:50.108 [279/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:50.108 [280/707] Linking static target lib/librte_gso.a 00:02:50.365 [281/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:50.365 [282/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.365 [283/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:50.365 [284/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:50.365 [285/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:50.646 [286/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:50.646 [287/707] Linking static target lib/librte_jobstats.a 00:02:50.646 [288/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.646 [289/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:50.646 [290/707] Linking target lib/librte_ethdev.so.24.0 00:02:50.646 [291/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:50.906 [292/707] Linking static target lib/librte_ip_frag.a 00:02:50.906 [293/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:50.906 [294/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:50.906 [295/707] Linking target lib/librte_metrics.so.24.0 00:02:50.906 [296/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:50.906 [297/707] Linking target lib/librte_bpf.so.24.0 00:02:50.906 [298/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.906 [299/707] Linking target lib/librte_gro.so.24.0 00:02:51.164 [300/707] Linking target lib/librte_gso.so.24.0 00:02:51.164 [301/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:51.164 [302/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.164 [303/707] Linking target lib/librte_jobstats.so.24.0 00:02:51.164 [304/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:51.164 [305/707] Linking target lib/librte_bitratestats.so.24.0 00:02:51.164 [306/707] Linking target lib/librte_ip_frag.so.24.0 00:02:51.164 [307/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:51.164 [308/707] Linking static target lib/librte_latencystats.a 00:02:51.164 [309/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:51.164 [310/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:51.164 [311/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:51.421 [312/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:51.421 [313/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.421 [314/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:51.421 [315/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:51.421 [316/707] Linking target lib/librte_latencystats.so.24.0 00:02:51.421 [317/707] Linking static target lib/librte_lpm.a 00:02:51.679 [318/707] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:51.679 [319/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:51.937 [320/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.937 [321/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:51.937 [322/707] Linking target lib/librte_lpm.so.24.0 00:02:51.937 [323/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:51.937 [324/707] Linking static target lib/librte_pcapng.a 00:02:51.937 [325/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:51.937 [326/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:52.194 [327/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:52.194 [328/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:52.194 [329/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.194 [330/707] Linking target lib/librte_pcapng.so.24.0 00:02:52.452 [331/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:52.452 [332/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:52.452 [333/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:52.452 [334/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:52.710 [335/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.710 [336/707] Linking target lib/librte_eventdev.so.24.0 00:02:52.710 [337/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:52.710 [338/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:52.710 [339/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:52.710 [340/707] Linking static target lib/librte_power.a 00:02:52.710 [341/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:52.710 [342/707] Linking static target lib/librte_rawdev.a 00:02:52.710 [343/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:52.710 [344/707] Linking static target lib/librte_member.a 00:02:52.968 [345/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:52.968 [346/707] Linking target lib/librte_dispatcher.so.24.0 00:02:52.968 [347/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:52.968 [348/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:52.968 [349/707] Linking static target lib/librte_regexdev.a 00:02:52.968 [350/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:53.225 [351/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.225 [352/707] Linking target lib/librte_member.so.24.0 00:02:53.225 [353/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:53.225 [354/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:53.482 [355/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.482 [356/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:53.482 [357/707] Linking static target lib/librte_mldev.a 00:02:53.482 [358/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:53.482 [359/707] Linking target 
lib/librte_rawdev.so.24.0 00:02:53.482 [360/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.739 [361/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:53.739 [362/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:53.739 [363/707] Linking static target lib/librte_reorder.a 00:02:53.739 [364/707] Linking target lib/librte_power.so.24.0 00:02:53.739 [365/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:53.739 [366/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.739 [367/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:53.739 [368/707] Linking target lib/librte_regexdev.so.24.0 00:02:53.996 [369/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:53.996 [370/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:53.996 [371/707] Linking static target lib/librte_rib.a 00:02:53.996 [372/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.996 [373/707] Linking target lib/librte_reorder.so.24.0 00:02:53.996 [374/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:53.996 [375/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:53.996 [376/707] Linking static target lib/librte_stack.a 00:02:53.996 [377/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:54.253 [378/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:54.253 [379/707] Linking static target lib/librte_security.a 00:02:54.253 [380/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.253 [381/707] Linking target lib/librte_stack.so.24.0 00:02:54.253 [382/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.509 [383/707] Linking target lib/librte_rib.so.24.0 00:02:54.509 [384/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:54.509 [385/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:54.767 [386/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:54.767 [387/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:54.767 [388/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.767 [389/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:54.767 [390/707] Linking static target lib/librte_sched.a 00:02:54.767 [391/707] Linking target lib/librte_security.so.24.0 00:02:54.767 [392/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.767 [393/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:55.024 [394/707] Linking target lib/librte_mldev.so.24.0 00:02:55.282 [395/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.282 [396/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:55.282 [397/707] Linking target lib/librte_sched.so.24.0 00:02:55.282 [398/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:55.540 [399/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:55.540 [400/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:55.540 [401/707] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:55.798 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:56.056 [403/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:56.056 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:56.056 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:56.314 [406/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:56.314 [407/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:56.314 [408/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:56.572 [409/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:56.572 [410/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:56.572 [411/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:56.572 [412/707] Linking static target lib/librte_ipsec.a 00:02:56.830 [413/707] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:56.830 [414/707] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:56.830 [415/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:56.830 [416/707] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:56.830 [417/707] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:56.830 [418/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:57.088 [419/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.088 [420/707] Linking target lib/librte_ipsec.so.24.0 00:02:57.088 [421/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:57.345 [422/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:57.602 [423/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:57.602 [424/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:57.602 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:57.602 [426/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:57.861 [427/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:57.861 [428/707] Linking static target lib/librte_fib.a 00:02:57.861 [429/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:57.861 [430/707] Linking static target lib/librte_pdcp.a 00:02:57.861 [431/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:58.133 [432/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:58.133 [433/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.133 [434/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.133 [435/707] Linking target lib/librte_fib.so.24.0 00:02:58.133 [436/707] Linking target lib/librte_pdcp.so.24.0 00:02:58.391 [437/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:58.391 [438/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:58.648 [439/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:58.648 [440/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:58.648 [441/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:58.648 [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:58.649 [443/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 
00:02:59.214 [444/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:59.214 [445/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:59.214 [446/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:59.214 [447/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:59.214 [448/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:59.472 [449/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:59.472 [450/707] Linking static target lib/librte_pdump.a 00:02:59.472 [451/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:59.472 [452/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:59.730 [453/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.730 [454/707] Linking target lib/librte_pdump.so.24.0 00:02:59.730 [455/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:59.988 [456/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:59.988 [457/707] Linking static target lib/librte_port.a 00:03:00.246 [458/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:00.246 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:00.246 [460/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:00.246 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:00.246 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:00.504 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:00.504 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:00.761 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:00.761 [466/707] Linking static target lib/librte_table.a 00:03:00.761 [467/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:01.019 [468/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.019 [469/707] Linking target lib/librte_port.so.24.0 00:03:01.019 [470/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:01.278 [471/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:01.278 [472/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:01.536 [473/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:01.536 [474/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:01.832 [475/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.832 [476/707] Linking target lib/librte_table.so.24.0 00:03:01.832 [477/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:02.091 [478/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:02.091 [479/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:02.349 [480/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:02.349 [481/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:02.349 [482/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:02.349 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:02.607 [484/707] Compiling C object 
lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:02.865 [485/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:02.865 [486/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:02.865 [487/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:02.865 [488/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:02.865 [489/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:03.129 [490/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:03.130 [491/707] Linking static target lib/librte_graph.a 00:03:03.130 [492/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:03.387 [493/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:03.952 [494/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.952 [495/707] Linking target lib/librte_graph.so.24.0 00:03:03.952 [496/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:03.952 [497/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:03.952 [498/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:03.952 [499/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:04.209 [500/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:04.209 [501/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:04.209 [502/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:04.209 [503/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:04.466 [504/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:04.466 [505/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:04.724 [506/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:04.724 [507/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:04.724 [508/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:04.982 [509/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:04.982 [510/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:04.982 [511/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:04.982 [512/707] Linking static target lib/librte_node.a 00:03:05.240 [513/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.240 [514/707] Linking target lib/librte_node.so.24.0 00:03:05.240 [515/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:05.240 [516/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:05.240 [517/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:05.498 [518/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:05.498 [519/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:05.498 [520/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:05.498 [521/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:05.498 [522/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.498 [523/707] Linking static target drivers/librte_bus_vdev.a 00:03:05.756 [524/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:05.756 [525/707] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.756 [526/707] Linking static target drivers/librte_bus_pci.a 00:03:05.756 [527/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:05.756 [528/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.756 [529/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.756 [530/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:05.756 [531/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:05.756 [532/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.014 [533/707] Linking target drivers/librte_bus_vdev.so.24.0 00:03:06.014 [534/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:06.014 [535/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:06.014 [536/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:06.014 [537/707] Linking static target drivers/librte_mempool_ring.a 00:03:06.014 [538/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:06.014 [539/707] Linking target drivers/librte_mempool_ring.so.24.0 00:03:06.014 [540/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:06.272 [541/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.272 [542/707] Linking target drivers/librte_bus_pci.so.24.0 00:03:06.272 [543/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:06.530 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:06.530 [545/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:06.788 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:07.045 [547/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:07.303 [548/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:07.561 [549/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:07.561 [550/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:07.820 [551/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:08.078 [552/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:08.078 [553/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:08.336 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:08.594 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:08.852 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:08.852 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:08.852 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:09.419 [559/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:09.419 [560/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:09.419 [561/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:09.419 
[562/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:09.678 [563/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:09.939 [564/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:10.197 [565/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:10.197 [566/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:10.197 [567/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:10.456 [568/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:10.456 [569/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:10.456 [570/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:10.714 [571/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:10.714 [572/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:10.714 [573/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:10.973 [574/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:10.973 [575/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:11.231 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:11.231 [577/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:11.490 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:11.490 [579/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:11.490 [580/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:11.490 [581/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:11.490 [582/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:11.747 [583/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:11.747 [584/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:11.747 [585/707] Linking static target drivers/librte_net_i40e.a 00:03:12.004 [586/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:12.004 [587/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:12.004 [588/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:12.262 [589/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:12.521 [590/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:12.521 [591/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:12.779 [592/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:12.779 [593/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.037 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:13.037 [595/707] Linking target drivers/librte_net_i40e.so.24.0 00:03:13.037 [596/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:13.295 [597/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:13.553 [598/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:13.553 [599/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:13.811 [600/707] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:13.811 [601/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:14.107 [602/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:14.107 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:14.107 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:14.365 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:14.365 [606/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:14.623 [607/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:14.623 [608/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:14.623 [609/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:14.623 [610/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:14.880 [611/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:15.138 [612/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:15.138 [613/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:15.395 [614/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:15.395 [615/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:15.395 [616/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:15.395 [617/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:16.325 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:16.582 [619/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:16.582 [620/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:16.582 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:16.839 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:17.096 [623/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:17.096 [624/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:17.096 [625/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:17.353 [626/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:17.353 [627/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:17.353 [628/707] Linking static target lib/librte_vhost.a 00:03:17.611 [629/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:17.611 [630/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:17.611 [631/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:17.611 [632/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:17.611 [633/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:17.870 [634/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:17.870 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:17.870 [636/707] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:18.132 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:18.132 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:18.389 [639/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:18.389 [640/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:18.389 [641/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:18.647 [642/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:18.647 [643/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:18.905 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:18.905 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:18.905 [646/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:19.162 [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:19.162 [648/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.162 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:19.162 [650/707] Linking target lib/librte_vhost.so.24.0 00:03:19.162 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:19.418 [652/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:19.674 [653/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:19.674 [654/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:19.674 [655/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:19.674 [656/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:20.238 [657/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:20.238 [658/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:20.496 [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:20.496 [660/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:20.496 [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:20.754 [662/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:20.754 [663/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:21.012 [664/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:21.012 [665/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:21.270 [666/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:21.270 [667/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:21.270 [668/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:21.270 [669/707] Linking static target lib/librte_pipeline.a 00:03:21.528 [670/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:21.528 [671/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:21.528 [672/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:21.786 [673/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:21.786 [674/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:22.044 [675/707] Linking target app/dpdk-graph 00:03:22.044 [676/707] Linking target 
app/dpdk-pdump 00:03:22.044 [677/707] Linking target app/dpdk-proc-info 00:03:22.302 [678/707] Linking target app/dpdk-test-acl 00:03:22.560 [679/707] Linking target app/dpdk-test-bbdev 00:03:22.560 [680/707] Linking target app/dpdk-test-cmdline 00:03:22.560 [681/707] Linking target app/dpdk-test-compress-perf 00:03:22.560 [682/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:22.560 [683/707] Linking target app/dpdk-test-crypto-perf 00:03:22.818 [684/707] Linking target app/dpdk-test-dma-perf 00:03:22.818 [685/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:22.818 [686/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:23.076 [687/707] Linking target app/dpdk-test-eventdev 00:03:23.076 [688/707] Linking target app/dpdk-test-fib 00:03:23.076 [689/707] Linking target app/dpdk-test-flow-perf 00:03:23.334 [690/707] Linking target app/dpdk-test-gpudev 00:03:23.334 [691/707] Linking target app/dpdk-test-mldev 00:03:23.334 [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:23.334 [693/707] Linking target app/dpdk-test-pipeline 00:03:23.591 [694/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:23.848 [695/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:23.848 [696/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:23.848 [697/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:23.848 [698/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:24.105 [699/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:24.389 [700/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:24.389 [701/707] Linking target app/dpdk-test-sad 00:03:24.646 [702/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:24.646 [703/707] Linking target app/dpdk-test-regex 00:03:24.646 [704/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.646 [705/707] Linking target lib/librte_pipeline.so.24.0 00:03:24.903 [706/707] Linking target app/dpdk-testpmd 00:03:25.161 [707/707] Linking target app/dpdk-test-security-perf 00:03:25.161 04:45:54 -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:25.161 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:25.161 [0/1] Installing files. 
00:03:25.421 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.421 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:25.422 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:25.422 
Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.422 
Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 
Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.423 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.424 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.424 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:25.424 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:25.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:25.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:25.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:25.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:25.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.425 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:25.684 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:25.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:25.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:25.685 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:25.685 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:25.685 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_gso.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_port.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.685 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.686 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.686 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.686 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:26.257 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:26.257 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:26.257 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:26.257 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:26.257 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:26.257 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:26.257 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:26.257 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.257 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.257 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.257 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.257 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.257 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.257 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.257 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.257 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.257 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.257 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.257 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.257 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.257 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.257 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.257 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.257 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.257 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.257 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing 
/home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing 
/home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.258 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing 
/home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.259 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 
Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:26.260 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:26.260 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:26.260 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:26.260 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:26.260 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:26.260 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:26.260 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:26.260 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:26.260 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:26.260 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:26.260 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:26.260 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:26.260 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:26.260 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:26.260 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:26.260 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:26.260 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:26.260 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:26.260 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:26.260 Installing symlink pointing to librte_meter.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:26.260 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:26.260 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:26.260 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:26.260 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:26.260 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:26.260 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:26.260 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:26.260 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:26.260 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:26.260 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:26.260 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:26.260 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:26.260 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:26.260 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:26.260 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:26.260 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:26.260 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:26.260 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:26.260 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:26.260 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:26.260 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:26.260 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:26.260 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:26.260 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:26.260 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:26.260 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:26.260 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:26.260 Installing symlink pointing to librte_distributor.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:26.260 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:26.260 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:26.260 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:26.260 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:26.260 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:26.260 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:26.260 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:26.260 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:26.260 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:26.260 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:26.260 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:26.260 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:26.260 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:26.260 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:26.260 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:26.260 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:26.260 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:26.260 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:26.260 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:26.260 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:26.260 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:26.260 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:26.260 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:26.260 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:26.260 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:26.261 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:26.261 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:26.261 Installing symlink pointing to librte_power.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:26.261 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:26.261 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:26.261 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:26.261 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:26.261 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:26.261 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:26.261 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:26.261 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:26.261 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:26.261 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:26.261 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:26.261 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:26.261 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:26.261 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:26.261 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:26.261 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:26.261 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:26.261 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:26.261 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:26.261 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:26.261 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:26.261 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:26.261 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:26.261 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:26.261 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:26.261 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:26.261 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:26.261 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:26.261 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:26.261 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:26.261 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:26.261 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:26.261 
Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:26.261 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:26.261 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:26.261 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:26.261 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:26.261 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:26.261 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:26.261 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:26.261 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:26.261 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:26.261 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:26.261 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:26.261 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:26.261 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:26.261 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:26.261 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:26.261 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:26.261 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:26.261 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:26.261 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:26.261 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:26.261 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:26.261 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:26.261 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:26.261 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:26.261 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:26.519 04:45:56 -- common/autobuild_common.sh@189 -- $ uname -s 00:03:26.519 04:45:56 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 
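[editor's note] The "custom install script" entry above, together with the './librte_bus_pci.so' -> 'dpdk/pmds-24.0/...' and "Installing symlink pointing to ..." records, shows the DPDK PMD shared objects ending up under a dpdk/pmds-24.0/ subdirectory of the library dir with their version symlinks re-created there. The lines below are only an illustrative reconstruction of that resulting layout for a single driver, not the actual behavior of symlink-drivers-solibs.sh:

    # Sketch (assumed equivalent): place one PMD under dpdk/pmds-24.0 and restore its symlinks
    libdir=/home/vagrant/spdk_repo/dpdk/build/lib
    mkdir -p "$libdir/dpdk/pmds-24.0"
    mv "$libdir"/librte_bus_pci.so.24.0 "$libdir/dpdk/pmds-24.0/"
    ln -sf librte_bus_pci.so.24.0 "$libdir/dpdk/pmds-24.0/librte_bus_pci.so.24"
    ln -sf librte_bus_pci.so.24 "$libdir/dpdk/pmds-24.0/librte_bus_pci.so"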
00:03:26.519 04:45:56 -- common/autobuild_common.sh@200 -- $ cat 00:03:26.519 04:45:56 -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:26.519 00:03:26.519 real 0m57.369s 00:03:26.519 user 6m55.421s 00:03:26.519 sys 1m4.043s 00:03:26.519 04:45:56 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:26.519 04:45:56 -- common/autotest_common.sh@10 -- $ set +x 00:03:26.519 ************************************ 00:03:26.519 END TEST build_native_dpdk 00:03:26.519 ************************************ 00:03:26.519 04:45:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:26.519 04:45:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:26.519 04:45:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:26.519 04:45:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:26.519 04:45:56 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:03:26.519 04:45:56 -- spdk/autobuild.sh@58 -- $ unittest_build 00:03:26.519 04:45:56 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build 00:03:26.519 04:45:56 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:03:26.519 04:45:56 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:26.519 04:45:56 -- common/autotest_common.sh@10 -- $ set +x 00:03:26.519 ************************************ 00:03:26.519 START TEST unittest_build 00:03:26.519 ************************************ 00:03:26.519 04:45:56 -- common/autotest_common.sh@1104 -- $ _unittest_build 00:03:26.519 04:45:56 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --without-shared 00:03:26.519 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:26.519 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:26.519 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:26.519 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:26.777 Using 'verbs' RDMA provider 00:03:42.253 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:54.510 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:54.510 Creating mk/config.mk...done. 00:03:54.510 Creating mk/cc.flags.mk...done. 00:03:54.510 Type 'make' to build. 00:03:54.510 04:46:22 -- common/autobuild_common.sh@403 -- $ make -j10 00:03:54.510 make[1]: Nothing to be done for 'all'. 
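[editor's note] The configure invocation and the subsequent make are the entirety of the unittest build step. To reproduce it outside the CI harness, the same two commands can be run by hand; the flag set below is copied verbatim from the configure line logged above, and only the workspace paths are specific to this job:

    # Sketch: rebuild SPDK the way this job does, against the DPDK prefix built earlier
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan \
        --enable-coverage --with-raid5f \
        --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --without-shared
    make -j10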
00:04:12.588 CC lib/ut/ut.o 00:04:12.588 CC lib/ut_mock/mock.o 00:04:12.588 CC lib/log/log.o 00:04:12.588 CC lib/log/log_flags.o 00:04:12.588 CC lib/log/log_deprecated.o 00:04:12.588 LIB libspdk_ut_mock.a 00:04:12.588 LIB libspdk_log.a 00:04:12.588 LIB libspdk_ut.a 00:04:12.588 CXX lib/trace_parser/trace.o 00:04:12.588 CC lib/util/base64.o 00:04:12.588 CC lib/util/bit_array.o 00:04:12.588 CC lib/util/cpuset.o 00:04:12.588 CC lib/util/crc16.o 00:04:12.588 CC lib/util/crc32.o 00:04:12.588 CC lib/ioat/ioat.o 00:04:12.588 CC lib/dma/dma.o 00:04:12.588 CC lib/util/crc32c.o 00:04:12.588 CC lib/vfio_user/host/vfio_user_pci.o 00:04:12.588 CC lib/util/crc32_ieee.o 00:04:12.588 CC lib/util/crc64.o 00:04:12.588 CC lib/vfio_user/host/vfio_user.o 00:04:12.588 CC lib/util/dif.o 00:04:12.588 LIB libspdk_dma.a 00:04:12.588 CC lib/util/fd.o 00:04:12.588 CC lib/util/file.o 00:04:12.588 CC lib/util/hexlify.o 00:04:12.588 CC lib/util/iov.o 00:04:12.588 CC lib/util/math.o 00:04:12.588 CC lib/util/pipe.o 00:04:12.588 LIB libspdk_ioat.a 00:04:12.588 CC lib/util/strerror_tls.o 00:04:12.588 CC lib/util/string.o 00:04:12.588 LIB libspdk_vfio_user.a 00:04:12.588 CC lib/util/uuid.o 00:04:12.588 CC lib/util/fd_group.o 00:04:12.588 CC lib/util/xor.o 00:04:12.588 CC lib/util/zipf.o 00:04:12.847 LIB libspdk_util.a 00:04:13.107 CC lib/rdma/rdma_verbs.o 00:04:13.107 CC lib/idxd/idxd.o 00:04:13.107 CC lib/rdma/common.o 00:04:13.107 CC lib/json/json_parse.o 00:04:13.107 CC lib/json/json_util.o 00:04:13.107 CC lib/env_dpdk/env.o 00:04:13.107 CC lib/idxd/idxd_user.o 00:04:13.107 CC lib/vmd/vmd.o 00:04:13.107 CC lib/conf/conf.o 00:04:13.107 LIB libspdk_trace_parser.a 00:04:13.366 CC lib/vmd/led.o 00:04:13.366 CC lib/env_dpdk/memory.o 00:04:13.366 CC lib/json/json_write.o 00:04:13.366 CC lib/env_dpdk/pci.o 00:04:13.366 LIB libspdk_conf.a 00:04:13.366 CC lib/env_dpdk/init.o 00:04:13.366 LIB libspdk_rdma.a 00:04:13.366 CC lib/env_dpdk/threads.o 00:04:13.366 CC lib/env_dpdk/pci_ioat.o 00:04:13.366 CC lib/env_dpdk/pci_virtio.o 00:04:13.623 CC lib/env_dpdk/pci_vmd.o 00:04:13.623 CC lib/env_dpdk/pci_idxd.o 00:04:13.623 CC lib/env_dpdk/pci_event.o 00:04:13.623 LIB libspdk_json.a 00:04:13.623 CC lib/env_dpdk/sigbus_handler.o 00:04:13.881 CC lib/env_dpdk/pci_dpdk.o 00:04:13.881 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:13.881 LIB libspdk_idxd.a 00:04:13.881 CC lib/jsonrpc/jsonrpc_server.o 00:04:13.881 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:13.881 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:13.881 CC lib/jsonrpc/jsonrpc_client.o 00:04:13.881 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:13.881 LIB libspdk_vmd.a 00:04:14.140 LIB libspdk_jsonrpc.a 00:04:14.398 CC lib/rpc/rpc.o 00:04:14.398 LIB libspdk_rpc.a 00:04:14.657 CC lib/trace/trace.o 00:04:14.657 CC lib/sock/sock.o 00:04:14.657 CC lib/sock/sock_rpc.o 00:04:14.657 CC lib/trace/trace_flags.o 00:04:14.657 CC lib/trace/trace_rpc.o 00:04:14.657 CC lib/notify/notify.o 00:04:14.657 CC lib/notify/notify_rpc.o 00:04:14.657 LIB libspdk_env_dpdk.a 00:04:14.915 LIB libspdk_notify.a 00:04:14.915 LIB libspdk_trace.a 00:04:15.174 CC lib/thread/thread.o 00:04:15.174 CC lib/thread/iobuf.o 00:04:15.174 LIB libspdk_sock.a 00:04:15.174 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:15.174 CC lib/nvme/nvme_fabric.o 00:04:15.174 CC lib/nvme/nvme_ctrlr.o 00:04:15.174 CC lib/nvme/nvme_ns_cmd.o 00:04:15.174 CC lib/nvme/nvme_pcie_common.o 00:04:15.174 CC lib/nvme/nvme_ns.o 00:04:15.174 CC lib/nvme/nvme_pcie.o 00:04:15.174 CC lib/nvme/nvme_qpair.o 00:04:15.432 CC lib/nvme/nvme.o 00:04:15.999 CC lib/nvme/nvme_quirks.o 00:04:15.999 CC 
lib/nvme/nvme_transport.o 00:04:15.999 CC lib/nvme/nvme_discovery.o 00:04:15.999 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:15.999 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:16.257 CC lib/nvme/nvme_tcp.o 00:04:16.257 CC lib/nvme/nvme_opal.o 00:04:16.257 CC lib/nvme/nvme_io_msg.o 00:04:16.257 CC lib/nvme/nvme_poll_group.o 00:04:16.516 CC lib/nvme/nvme_zns.o 00:04:16.516 CC lib/nvme/nvme_cuse.o 00:04:16.516 CC lib/nvme/nvme_vfio_user.o 00:04:16.516 CC lib/nvme/nvme_rdma.o 00:04:17.177 LIB libspdk_thread.a 00:04:17.177 CC lib/accel/accel.o 00:04:17.177 CC lib/accel/accel_rpc.o 00:04:17.177 CC lib/accel/accel_sw.o 00:04:17.177 CC lib/init/json_config.o 00:04:17.177 CC lib/blob/blobstore.o 00:04:17.177 CC lib/virtio/virtio.o 00:04:17.177 CC lib/virtio/virtio_vhost_user.o 00:04:17.459 CC lib/virtio/virtio_vfio_user.o 00:04:17.459 CC lib/init/subsystem.o 00:04:17.459 CC lib/virtio/virtio_pci.o 00:04:17.459 CC lib/blob/request.o 00:04:17.459 CC lib/init/subsystem_rpc.o 00:04:17.459 CC lib/blob/zeroes.o 00:04:17.717 CC lib/init/rpc.o 00:04:17.717 CC lib/blob/blob_bs_dev.o 00:04:17.717 LIB libspdk_virtio.a 00:04:17.717 LIB libspdk_init.a 00:04:17.975 CC lib/event/app.o 00:04:17.975 CC lib/event/reactor.o 00:04:17.975 CC lib/event/log_rpc.o 00:04:17.975 CC lib/event/app_rpc.o 00:04:17.975 CC lib/event/scheduler_static.o 00:04:17.975 LIB libspdk_nvme.a 00:04:18.233 LIB libspdk_accel.a 00:04:18.491 LIB libspdk_event.a 00:04:18.491 CC lib/bdev/bdev.o 00:04:18.491 CC lib/bdev/bdev_rpc.o 00:04:18.491 CC lib/bdev/part.o 00:04:18.491 CC lib/bdev/bdev_zone.o 00:04:18.491 CC lib/bdev/scsi_nvme.o 00:04:21.020 LIB libspdk_blob.a 00:04:21.020 CC lib/blobfs/blobfs.o 00:04:21.020 CC lib/blobfs/tree.o 00:04:21.020 CC lib/lvol/lvol.o 00:04:21.586 LIB libspdk_bdev.a 00:04:21.843 CC lib/scsi/dev.o 00:04:21.843 CC lib/scsi/lun.o 00:04:21.843 CC lib/scsi/scsi.o 00:04:21.843 CC lib/scsi/port.o 00:04:21.843 CC lib/scsi/scsi_bdev.o 00:04:21.843 CC lib/nbd/nbd.o 00:04:21.843 CC lib/nvmf/ctrlr.o 00:04:21.843 LIB libspdk_blobfs.a 00:04:21.843 CC lib/ftl/ftl_core.o 00:04:21.843 LIB libspdk_lvol.a 00:04:21.843 CC lib/ftl/ftl_init.o 00:04:21.843 CC lib/ftl/ftl_layout.o 00:04:22.101 CC lib/ftl/ftl_debug.o 00:04:22.101 CC lib/ftl/ftl_io.o 00:04:22.101 CC lib/ftl/ftl_sb.o 00:04:22.101 CC lib/nbd/nbd_rpc.o 00:04:22.101 CC lib/scsi/scsi_pr.o 00:04:22.360 CC lib/scsi/scsi_rpc.o 00:04:22.360 CC lib/scsi/task.o 00:04:22.360 CC lib/nvmf/ctrlr_discovery.o 00:04:22.360 CC lib/nvmf/ctrlr_bdev.o 00:04:22.360 CC lib/ftl/ftl_l2p.o 00:04:22.360 CC lib/ftl/ftl_l2p_flat.o 00:04:22.360 LIB libspdk_nbd.a 00:04:22.360 CC lib/ftl/ftl_nv_cache.o 00:04:22.360 CC lib/nvmf/subsystem.o 00:04:22.360 CC lib/nvmf/nvmf.o 00:04:22.360 CC lib/nvmf/nvmf_rpc.o 00:04:22.618 CC lib/nvmf/transport.o 00:04:22.618 LIB libspdk_scsi.a 00:04:22.618 CC lib/nvmf/tcp.o 00:04:22.618 CC lib/nvmf/rdma.o 00:04:22.876 CC lib/ftl/ftl_band.o 00:04:23.134 CC lib/ftl/ftl_band_ops.o 00:04:23.134 CC lib/ftl/ftl_writer.o 00:04:23.391 CC lib/ftl/ftl_rq.o 00:04:23.391 CC lib/ftl/ftl_reloc.o 00:04:23.391 CC lib/ftl/ftl_l2p_cache.o 00:04:23.391 CC lib/ftl/ftl_p2l.o 00:04:23.392 CC lib/ftl/mngt/ftl_mngt.o 00:04:23.649 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:23.649 CC lib/iscsi/conn.o 00:04:23.906 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:23.906 CC lib/vhost/vhost.o 00:04:23.906 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:23.907 CC lib/vhost/vhost_rpc.o 00:04:23.907 CC lib/vhost/vhost_scsi.o 00:04:23.907 CC lib/vhost/vhost_blk.o 00:04:23.907 CC lib/vhost/rte_vhost_user.o 00:04:24.164 CC 
lib/ftl/mngt/ftl_mngt_md.o 00:04:24.164 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:24.421 CC lib/iscsi/init_grp.o 00:04:24.421 CC lib/iscsi/iscsi.o 00:04:24.421 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:24.421 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:24.421 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:24.421 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:24.679 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:24.679 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:24.679 CC lib/iscsi/md5.o 00:04:24.679 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:24.679 CC lib/ftl/utils/ftl_conf.o 00:04:24.679 CC lib/iscsi/param.o 00:04:24.679 CC lib/iscsi/portal_grp.o 00:04:24.937 CC lib/iscsi/tgt_node.o 00:04:24.937 CC lib/iscsi/iscsi_subsystem.o 00:04:24.937 CC lib/iscsi/iscsi_rpc.o 00:04:24.937 CC lib/ftl/utils/ftl_md.o 00:04:24.937 LIB libspdk_vhost.a 00:04:24.937 CC lib/ftl/utils/ftl_mempool.o 00:04:24.937 CC lib/ftl/utils/ftl_bitmap.o 00:04:25.194 CC lib/iscsi/task.o 00:04:25.194 CC lib/ftl/utils/ftl_property.o 00:04:25.194 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:25.194 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:25.194 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:25.194 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:25.194 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:25.452 LIB libspdk_nvmf.a 00:04:25.452 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:25.452 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:25.452 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:25.452 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:25.452 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:25.452 CC lib/ftl/base/ftl_base_dev.o 00:04:25.452 CC lib/ftl/base/ftl_base_bdev.o 00:04:25.452 CC lib/ftl/ftl_trace.o 00:04:25.710 LIB libspdk_ftl.a 00:04:25.968 LIB libspdk_iscsi.a 00:04:26.226 CC module/env_dpdk/env_dpdk_rpc.o 00:04:26.226 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:26.226 CC module/sock/posix/posix.o 00:04:26.226 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:26.226 CC module/accel/error/accel_error.o 00:04:26.226 CC module/accel/ioat/accel_ioat.o 00:04:26.226 CC module/accel/iaa/accel_iaa.o 00:04:26.226 CC module/accel/dsa/accel_dsa.o 00:04:26.226 CC module/blob/bdev/blob_bdev.o 00:04:26.226 CC module/scheduler/gscheduler/gscheduler.o 00:04:26.486 LIB libspdk_env_dpdk_rpc.a 00:04:26.486 CC module/accel/dsa/accel_dsa_rpc.o 00:04:26.486 LIB libspdk_scheduler_gscheduler.a 00:04:26.486 LIB libspdk_scheduler_dpdk_governor.a 00:04:26.486 CC module/accel/error/accel_error_rpc.o 00:04:26.486 CC module/accel/ioat/accel_ioat_rpc.o 00:04:26.486 CC module/accel/iaa/accel_iaa_rpc.o 00:04:26.486 LIB libspdk_scheduler_dynamic.a 00:04:26.486 LIB libspdk_accel_dsa.a 00:04:26.743 LIB libspdk_blob_bdev.a 00:04:26.743 LIB libspdk_accel_ioat.a 00:04:26.743 LIB libspdk_accel_error.a 00:04:26.743 LIB libspdk_accel_iaa.a 00:04:26.743 CC module/bdev/delay/vbdev_delay.o 00:04:26.743 CC module/bdev/gpt/gpt.o 00:04:26.743 CC module/bdev/null/bdev_null.o 00:04:26.743 CC module/bdev/lvol/vbdev_lvol.o 00:04:26.743 CC module/bdev/passthru/vbdev_passthru.o 00:04:26.743 CC module/bdev/error/vbdev_error.o 00:04:26.743 CC module/bdev/nvme/bdev_nvme.o 00:04:26.743 CC module/bdev/malloc/bdev_malloc.o 00:04:26.743 CC module/blobfs/bdev/blobfs_bdev.o 00:04:27.001 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:27.001 CC module/bdev/gpt/vbdev_gpt.o 00:04:27.001 CC module/bdev/null/bdev_null_rpc.o 00:04:27.001 CC module/bdev/error/vbdev_error_rpc.o 00:04:27.259 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:27.259 LIB libspdk_blobfs_bdev.a 00:04:27.259 LIB libspdk_sock_posix.a 00:04:27.259 CC 
module/bdev/delay/vbdev_delay_rpc.o 00:04:27.259 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:27.259 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:27.259 LIB libspdk_bdev_null.a 00:04:27.259 LIB libspdk_bdev_error.a 00:04:27.259 LIB libspdk_bdev_gpt.a 00:04:27.259 CC module/bdev/raid/bdev_raid.o 00:04:27.259 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:27.259 LIB libspdk_bdev_passthru.a 00:04:27.259 LIB libspdk_bdev_delay.a 00:04:27.517 CC module/bdev/split/vbdev_split.o 00:04:27.517 LIB libspdk_bdev_malloc.a 00:04:27.517 CC module/bdev/split/vbdev_split_rpc.o 00:04:27.517 CC module/bdev/aio/bdev_aio.o 00:04:27.517 CC module/bdev/aio/bdev_aio_rpc.o 00:04:27.517 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:27.517 CC module/bdev/ftl/bdev_ftl.o 00:04:27.517 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:27.517 LIB libspdk_bdev_split.a 00:04:27.776 LIB libspdk_bdev_lvol.a 00:04:27.776 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:27.776 CC module/bdev/iscsi/bdev_iscsi.o 00:04:27.776 LIB libspdk_bdev_aio.a 00:04:27.776 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:27.776 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:27.776 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:27.776 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:27.776 LIB libspdk_bdev_ftl.a 00:04:27.776 CC module/bdev/raid/bdev_raid_rpc.o 00:04:27.776 LIB libspdk_bdev_zone_block.a 00:04:28.035 CC module/bdev/raid/bdev_raid_sb.o 00:04:28.035 CC module/bdev/nvme/nvme_rpc.o 00:04:28.035 CC module/bdev/nvme/bdev_mdns_client.o 00:04:28.035 CC module/bdev/nvme/vbdev_opal.o 00:04:28.035 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:28.035 CC module/bdev/raid/raid0.o 00:04:28.035 LIB libspdk_bdev_iscsi.a 00:04:28.293 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:28.293 CC module/bdev/raid/raid1.o 00:04:28.293 CC module/bdev/raid/concat.o 00:04:28.293 CC module/bdev/raid/raid5f.o 00:04:28.293 LIB libspdk_bdev_virtio.a 00:04:28.860 LIB libspdk_bdev_raid.a 00:04:29.428 LIB libspdk_bdev_nvme.a 00:04:29.687 CC module/event/subsystems/sock/sock.o 00:04:29.687 CC module/event/subsystems/scheduler/scheduler.o 00:04:29.687 CC module/event/subsystems/vmd/vmd.o 00:04:29.687 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:29.687 CC module/event/subsystems/iobuf/iobuf.o 00:04:29.687 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:29.687 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:29.687 LIB libspdk_event_scheduler.a 00:04:29.687 LIB libspdk_event_vhost_blk.a 00:04:29.687 LIB libspdk_event_sock.a 00:04:29.687 LIB libspdk_event_vmd.a 00:04:29.687 LIB libspdk_event_iobuf.a 00:04:29.945 CC module/event/subsystems/accel/accel.o 00:04:30.204 LIB libspdk_event_accel.a 00:04:30.204 CC module/event/subsystems/bdev/bdev.o 00:04:30.462 LIB libspdk_event_bdev.a 00:04:30.462 CC module/event/subsystems/scsi/scsi.o 00:04:30.462 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:30.462 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:30.721 CC module/event/subsystems/nbd/nbd.o 00:04:30.721 LIB libspdk_event_scsi.a 00:04:30.721 LIB libspdk_event_nbd.a 00:04:30.980 LIB libspdk_event_nvmf.a 00:04:30.980 CC module/event/subsystems/iscsi/iscsi.o 00:04:30.980 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:30.980 LIB libspdk_event_vhost_scsi.a 00:04:30.980 LIB libspdk_event_iscsi.a 00:04:31.240 CC app/trace_record/trace_record.o 00:04:31.240 CC app/spdk_lspci/spdk_lspci.o 00:04:31.240 CXX app/trace/trace.o 00:04:31.240 CC app/nvmf_tgt/nvmf_main.o 00:04:31.240 CC examples/accel/perf/accel_perf.o 00:04:31.240 CC app/iscsi_tgt/iscsi_tgt.o 00:04:31.240 CC 
app/spdk_tgt/spdk_tgt.o 00:04:31.240 CC test/accel/dif/dif.o 00:04:31.499 CC examples/blob/hello_world/hello_blob.o 00:04:31.499 CC examples/bdev/hello_world/hello_bdev.o 00:04:31.499 LINK spdk_lspci 00:04:31.499 LINK nvmf_tgt 00:04:31.499 LINK iscsi_tgt 00:04:31.499 LINK spdk_tgt 00:04:31.499 LINK spdk_trace_record 00:04:31.758 LINK hello_blob 00:04:31.758 LINK hello_bdev 00:04:31.758 LINK spdk_trace 00:04:31.758 LINK accel_perf 00:04:31.758 LINK dif 00:04:32.325 CC app/spdk_nvme_perf/perf.o 00:04:32.325 CC examples/ioat/perf/perf.o 00:04:32.584 LINK ioat_perf 00:04:33.153 CC examples/ioat/verify/verify.o 00:04:33.153 LINK verify 00:04:33.153 CC examples/blob/cli/blobcli.o 00:04:33.471 LINK spdk_nvme_perf 00:04:33.471 CC examples/nvme/hello_world/hello_world.o 00:04:33.744 LINK hello_world 00:04:34.003 LINK blobcli 00:04:34.003 CC examples/nvme/reconnect/reconnect.o 00:04:34.262 LINK reconnect 00:04:34.829 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:35.397 CC examples/sock/hello_world/hello_sock.o 00:04:35.657 LINK nvme_manage 00:04:35.657 LINK hello_sock 00:04:35.917 CC examples/bdev/bdevperf/bdevperf.o 00:04:36.176 CC examples/vmd/lsvmd/lsvmd.o 00:04:36.176 CC examples/vmd/led/led.o 00:04:36.176 CC examples/nvme/arbitration/arbitration.o 00:04:36.176 CC examples/nvme/hotplug/hotplug.o 00:04:36.176 LINK lsvmd 00:04:36.176 LINK led 00:04:36.176 CC test/app/bdev_svc/bdev_svc.o 00:04:36.435 LINK hotplug 00:04:36.435 LINK bdev_svc 00:04:36.435 LINK arbitration 00:04:36.693 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:36.693 CC app/spdk_nvme_identify/identify.o 00:04:36.951 LINK bdevperf 00:04:36.951 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:36.951 LINK nvme_fuzz 00:04:37.210 LINK cmb_copy 00:04:37.469 CC examples/nvme/abort/abort.o 00:04:37.469 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:37.728 LINK spdk_nvme_identify 00:04:37.728 LINK pmr_persistence 00:04:37.728 CC examples/nvmf/nvmf/nvmf.o 00:04:37.728 CC examples/util/zipf/zipf.o 00:04:37.987 CC examples/thread/thread/thread_ex.o 00:04:37.987 LINK abort 00:04:37.987 LINK zipf 00:04:37.987 LINK nvmf 00:04:38.245 LINK thread 00:04:38.504 CC examples/idxd/perf/perf.o 00:04:38.504 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:38.504 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:38.762 CC app/spdk_nvme_discover/discovery_aer.o 00:04:38.762 LINK idxd_perf 00:04:38.762 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:39.020 LINK spdk_nvme_discover 00:04:39.278 CC test/app/histogram_perf/histogram_perf.o 00:04:39.278 LINK histogram_perf 00:04:39.278 LINK vhost_fuzz 00:04:39.536 CC test/app/jsoncat/jsoncat.o 00:04:39.794 CC test/app/stub/stub.o 00:04:39.794 LINK jsoncat 00:04:40.054 LINK stub 00:04:40.312 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:40.571 CC app/spdk_top/spdk_top.o 00:04:40.857 LINK interrupt_tgt 00:04:40.857 LINK iscsi_fuzz 00:04:40.857 CC app/vhost/vhost.o 00:04:41.139 CC app/spdk_dd/spdk_dd.o 00:04:41.139 CC app/fio/nvme/fio_plugin.o 00:04:41.139 CC test/bdev/bdevio/bdevio.o 00:04:41.139 LINK vhost 00:04:41.398 LINK spdk_dd 00:04:41.657 LINK bdevio 00:04:41.657 LINK spdk_top 00:04:41.657 LINK spdk_nvme 00:04:41.915 CC app/fio/bdev/fio_plugin.o 00:04:42.483 CC test/blobfs/mkfs/mkfs.o 00:04:42.483 LINK spdk_bdev 00:04:42.740 TEST_HEADER include/spdk/accel.h 00:04:42.740 TEST_HEADER include/spdk/accel_module.h 00:04:42.740 TEST_HEADER include/spdk/assert.h 00:04:42.740 TEST_HEADER include/spdk/barrier.h 00:04:42.740 TEST_HEADER include/spdk/base64.h 00:04:42.740 TEST_HEADER include/spdk/bdev.h 
00:04:42.740 TEST_HEADER include/spdk/bdev_module.h 00:04:42.740 TEST_HEADER include/spdk/bdev_zone.h 00:04:42.740 TEST_HEADER include/spdk/bit_array.h 00:04:42.740 TEST_HEADER include/spdk/bit_pool.h 00:04:42.740 TEST_HEADER include/spdk/blob.h 00:04:42.740 TEST_HEADER include/spdk/blob_bdev.h 00:04:42.740 TEST_HEADER include/spdk/blobfs.h 00:04:42.740 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:42.740 TEST_HEADER include/spdk/conf.h 00:04:42.740 TEST_HEADER include/spdk/config.h 00:04:42.740 TEST_HEADER include/spdk/cpuset.h 00:04:42.740 TEST_HEADER include/spdk/crc16.h 00:04:42.740 TEST_HEADER include/spdk/crc32.h 00:04:42.740 TEST_HEADER include/spdk/crc64.h 00:04:42.740 TEST_HEADER include/spdk/dif.h 00:04:42.740 TEST_HEADER include/spdk/dma.h 00:04:42.740 TEST_HEADER include/spdk/endian.h 00:04:42.740 TEST_HEADER include/spdk/env.h 00:04:42.740 TEST_HEADER include/spdk/env_dpdk.h 00:04:42.740 TEST_HEADER include/spdk/event.h 00:04:42.740 TEST_HEADER include/spdk/fd.h 00:04:42.740 TEST_HEADER include/spdk/fd_group.h 00:04:42.740 TEST_HEADER include/spdk/file.h 00:04:42.740 TEST_HEADER include/spdk/ftl.h 00:04:42.740 TEST_HEADER include/spdk/gpt_spec.h 00:04:42.740 TEST_HEADER include/spdk/hexlify.h 00:04:42.740 TEST_HEADER include/spdk/histogram_data.h 00:04:42.740 TEST_HEADER include/spdk/idxd.h 00:04:42.740 TEST_HEADER include/spdk/idxd_spec.h 00:04:42.740 TEST_HEADER include/spdk/init.h 00:04:42.740 TEST_HEADER include/spdk/ioat.h 00:04:42.740 TEST_HEADER include/spdk/ioat_spec.h 00:04:42.740 TEST_HEADER include/spdk/iscsi_spec.h 00:04:42.740 TEST_HEADER include/spdk/json.h 00:04:42.740 TEST_HEADER include/spdk/jsonrpc.h 00:04:42.740 TEST_HEADER include/spdk/likely.h 00:04:42.740 TEST_HEADER include/spdk/log.h 00:04:42.740 LINK mkfs 00:04:42.740 TEST_HEADER include/spdk/lvol.h 00:04:42.740 TEST_HEADER include/spdk/memory.h 00:04:42.740 TEST_HEADER include/spdk/mmio.h 00:04:42.740 TEST_HEADER include/spdk/nbd.h 00:04:42.740 TEST_HEADER include/spdk/notify.h 00:04:42.740 TEST_HEADER include/spdk/nvme.h 00:04:42.740 TEST_HEADER include/spdk/nvme_intel.h 00:04:42.740 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:42.740 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:42.740 TEST_HEADER include/spdk/nvme_spec.h 00:04:42.740 TEST_HEADER include/spdk/nvme_zns.h 00:04:42.740 TEST_HEADER include/spdk/nvmf.h 00:04:42.740 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:42.740 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:42.740 TEST_HEADER include/spdk/nvmf_spec.h 00:04:42.740 TEST_HEADER include/spdk/nvmf_transport.h 00:04:42.740 TEST_HEADER include/spdk/opal.h 00:04:42.740 TEST_HEADER include/spdk/opal_spec.h 00:04:42.740 TEST_HEADER include/spdk/pci_ids.h 00:04:42.740 TEST_HEADER include/spdk/pipe.h 00:04:42.740 TEST_HEADER include/spdk/queue.h 00:04:42.740 TEST_HEADER include/spdk/reduce.h 00:04:42.740 TEST_HEADER include/spdk/rpc.h 00:04:42.740 TEST_HEADER include/spdk/scheduler.h 00:04:42.740 TEST_HEADER include/spdk/scsi.h 00:04:42.740 TEST_HEADER include/spdk/scsi_spec.h 00:04:42.740 TEST_HEADER include/spdk/sock.h 00:04:42.740 TEST_HEADER include/spdk/stdinc.h 00:04:42.740 TEST_HEADER include/spdk/string.h 00:04:42.740 TEST_HEADER include/spdk/thread.h 00:04:42.740 TEST_HEADER include/spdk/trace.h 00:04:42.740 TEST_HEADER include/spdk/trace_parser.h 00:04:42.740 TEST_HEADER include/spdk/tree.h 00:04:42.740 TEST_HEADER include/spdk/ublk.h 00:04:42.740 TEST_HEADER include/spdk/util.h 00:04:42.740 TEST_HEADER include/spdk/uuid.h 00:04:42.740 TEST_HEADER include/spdk/version.h 00:04:42.740 
TEST_HEADER include/spdk/vfio_user_pci.h 00:04:42.998 CC test/dma/test_dma/test_dma.o 00:04:42.998 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:42.998 TEST_HEADER include/spdk/vhost.h 00:04:42.998 TEST_HEADER include/spdk/vmd.h 00:04:42.998 TEST_HEADER include/spdk/xor.h 00:04:42.998 TEST_HEADER include/spdk/zipf.h 00:04:42.998 CXX test/cpp_headers/accel.o 00:04:43.256 CXX test/cpp_headers/accel_module.o 00:04:43.514 LINK test_dma 00:04:43.514 CC test/env/mem_callbacks/mem_callbacks.o 00:04:43.514 CC test/event/event_perf/event_perf.o 00:04:43.514 CXX test/cpp_headers/assert.o 00:04:43.773 LINK event_perf 00:04:43.773 CXX test/cpp_headers/barrier.o 00:04:43.773 CC test/event/reactor/reactor.o 00:04:44.032 CXX test/cpp_headers/base64.o 00:04:44.032 LINK reactor 00:04:44.032 LINK mem_callbacks 00:04:44.032 CXX test/cpp_headers/bdev.o 00:04:44.291 CXX test/cpp_headers/bdev_module.o 00:04:44.549 CXX test/cpp_headers/bdev_zone.o 00:04:44.808 CC test/env/vtophys/vtophys.o 00:04:44.808 CXX test/cpp_headers/bit_array.o 00:04:45.066 LINK vtophys 00:04:45.066 CXX test/cpp_headers/bit_pool.o 00:04:45.325 CXX test/cpp_headers/blob.o 00:04:45.325 CC test/event/reactor_perf/reactor_perf.o 00:04:45.325 CC test/event/app_repeat/app_repeat.o 00:04:45.325 CXX test/cpp_headers/blob_bdev.o 00:04:45.583 LINK reactor_perf 00:04:45.842 CXX test/cpp_headers/blobfs.o 00:04:45.842 LINK app_repeat 00:04:46.100 CXX test/cpp_headers/blobfs_bdev.o 00:04:46.100 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:46.100 CXX test/cpp_headers/conf.o 00:04:46.358 LINK env_dpdk_post_init 00:04:46.358 CXX test/cpp_headers/config.o 00:04:46.358 CXX test/cpp_headers/cpuset.o 00:04:46.616 CC test/lvol/esnap/esnap.o 00:04:46.616 CXX test/cpp_headers/crc16.o 00:04:46.616 CXX test/cpp_headers/crc32.o 00:04:46.616 CC test/rpc_client/rpc_client_test.o 00:04:46.874 CC test/nvme/aer/aer.o 00:04:46.874 CXX test/cpp_headers/crc64.o 00:04:46.874 CC test/nvme/reset/reset.o 00:04:46.874 LINK rpc_client_test 00:04:46.874 CXX test/cpp_headers/dif.o 00:04:47.132 LINK aer 00:04:47.132 CXX test/cpp_headers/dma.o 00:04:47.132 LINK reset 00:04:47.132 CC test/nvme/sgl/sgl.o 00:04:47.395 CXX test/cpp_headers/endian.o 00:04:47.395 CC test/event/scheduler/scheduler.o 00:04:47.658 CXX test/cpp_headers/env.o 00:04:47.658 LINK sgl 00:04:47.658 CC test/nvme/e2edp/nvme_dp.o 00:04:47.658 CC test/thread/poller_perf/poller_perf.o 00:04:47.658 CXX test/cpp_headers/env_dpdk.o 00:04:47.658 LINK scheduler 00:04:47.658 CC test/env/memory/memory_ut.o 00:04:47.940 LINK poller_perf 00:04:47.940 CXX test/cpp_headers/event.o 00:04:47.940 LINK nvme_dp 00:04:47.940 CXX test/cpp_headers/fd.o 00:04:47.940 CXX test/cpp_headers/fd_group.o 00:04:48.198 CXX test/cpp_headers/file.o 00:04:48.457 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:04:48.457 CXX test/cpp_headers/ftl.o 00:04:48.457 CC test/unit/lib/accel/accel.c/accel_ut.o 00:04:48.457 CC test/nvme/overhead/overhead.o 00:04:48.715 LINK histogram_ut 00:04:48.715 LINK memory_ut 00:04:48.715 CXX test/cpp_headers/gpt_spec.o 00:04:48.715 CC test/thread/lock/spdk_lock.o 00:04:48.715 CXX test/cpp_headers/hexlify.o 00:04:48.715 CC test/nvme/err_injection/err_injection.o 00:04:48.715 LINK overhead 00:04:48.973 CXX test/cpp_headers/histogram_data.o 00:04:48.973 CC test/env/pci/pci_ut.o 00:04:48.973 LINK err_injection 00:04:48.973 CC test/nvme/startup/startup.o 00:04:48.973 CXX test/cpp_headers/idxd.o 00:04:49.232 CC test/nvme/reserve/reserve.o 00:04:49.232 LINK startup 00:04:49.232 CXX 
test/cpp_headers/idxd_spec.o 00:04:49.490 LINK reserve 00:04:49.490 CXX test/cpp_headers/init.o 00:04:49.490 LINK pci_ut 00:04:49.748 CXX test/cpp_headers/ioat.o 00:04:49.748 CXX test/cpp_headers/ioat_spec.o 00:04:50.007 CXX test/cpp_headers/iscsi_spec.o 00:04:50.007 CXX test/cpp_headers/json.o 00:04:50.007 CC test/nvme/simple_copy/simple_copy.o 00:04:50.265 CC test/nvme/connect_stress/connect_stress.o 00:04:50.265 CXX test/cpp_headers/jsonrpc.o 00:04:50.265 CC test/nvme/boot_partition/boot_partition.o 00:04:50.524 LINK simple_copy 00:04:50.524 LINK connect_stress 00:04:50.524 CXX test/cpp_headers/likely.o 00:04:50.783 CC test/nvme/compliance/nvme_compliance.o 00:04:50.783 LINK spdk_lock 00:04:50.783 CXX test/cpp_headers/log.o 00:04:50.783 LINK boot_partition 00:04:51.041 CXX test/cpp_headers/lvol.o 00:04:51.041 CC test/nvme/fused_ordering/fused_ordering.o 00:04:51.041 CXX test/cpp_headers/memory.o 00:04:51.041 LINK nvme_compliance 00:04:51.042 LINK accel_ut 00:04:51.300 LINK fused_ordering 00:04:51.559 CXX test/cpp_headers/mmio.o 00:04:51.559 CXX test/cpp_headers/nbd.o 00:04:51.559 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:51.559 CC test/nvme/fdp/fdp.o 00:04:51.559 CXX test/cpp_headers/notify.o 00:04:51.817 LINK doorbell_aers 00:04:51.817 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:04:51.817 CXX test/cpp_headers/nvme.o 00:04:51.817 CC test/nvme/cuse/cuse.o 00:04:52.076 LINK fdp 00:04:52.076 CXX test/cpp_headers/nvme_intel.o 00:04:52.076 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:04:52.076 CXX test/cpp_headers/nvme_ocssd.o 00:04:52.334 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:52.592 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:04:52.592 LINK esnap 00:04:52.592 CXX test/cpp_headers/nvme_spec.o 00:04:52.592 CXX test/cpp_headers/nvme_zns.o 00:04:52.851 CXX test/cpp_headers/nvmf.o 00:04:52.851 LINK tree_ut 00:04:52.851 LINK blob_bdev_ut 00:04:52.851 LINK cuse 00:04:52.851 CXX test/cpp_headers/nvmf_cmd.o 00:04:52.851 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:04:53.109 CC test/unit/lib/dma/dma.c/dma_ut.o 00:04:53.109 CC test/unit/lib/event/app.c/app_ut.o 00:04:53.109 CC test/unit/lib/blob/blob.c/blob_ut.o 00:04:53.109 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:53.109 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:04:53.109 CXX test/cpp_headers/nvmf_spec.o 00:04:53.367 CXX test/cpp_headers/nvmf_transport.o 00:04:53.367 LINK dma_ut 00:04:53.367 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:04:53.625 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:04:53.625 CXX test/cpp_headers/opal.o 00:04:53.625 CXX test/cpp_headers/opal_spec.o 00:04:53.625 LINK app_ut 00:04:53.625 CXX test/cpp_headers/pci_ids.o 00:04:53.625 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:04:53.883 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:04:53.883 CXX test/cpp_headers/pipe.o 00:04:53.883 CXX test/cpp_headers/queue.o 00:04:53.883 CC test/unit/lib/iscsi/param.c/param_ut.o 00:04:53.883 CXX test/cpp_headers/reduce.o 00:04:53.883 LINK ioat_ut 00:04:54.141 CXX test/cpp_headers/rpc.o 00:04:54.141 LINK reactor_ut 00:04:54.141 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:04:54.141 LINK init_grp_ut 00:04:54.398 CXX test/cpp_headers/scheduler.o 00:04:54.398 CXX test/cpp_headers/scsi.o 00:04:54.398 LINK blobfs_async_ut 00:04:54.398 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:04:54.656 LINK param_ut 00:04:54.656 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:04:54.656 CXX test/cpp_headers/scsi_spec.o 00:04:54.656 LINK conn_ut 00:04:54.915 LINK portal_grp_ut 00:04:54.915 CXX 
test/cpp_headers/sock.o 00:04:54.915 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:04:54.915 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:04:55.180 CXX test/cpp_headers/stdinc.o 00:04:55.180 CXX test/cpp_headers/string.o 00:04:55.180 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:04:55.180 LINK json_util_ut 00:04:55.439 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:04:55.439 CXX test/cpp_headers/thread.o 00:04:55.439 CXX test/cpp_headers/trace.o 00:04:56.005 CXX test/cpp_headers/trace_parser.o 00:04:56.005 LINK blobfs_bdev_ut 00:04:56.005 LINK tgt_node_ut 00:04:56.005 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:04:56.005 LINK json_write_ut 00:04:56.005 CXX test/cpp_headers/tree.o 00:04:56.005 CXX test/cpp_headers/ublk.o 00:04:56.263 CXX test/cpp_headers/util.o 00:04:56.263 LINK blobfs_sync_ut 00:04:56.263 CXX test/cpp_headers/uuid.o 00:04:56.521 CC test/unit/lib/bdev/part.c/part_ut.o 00:04:56.521 CC test/unit/lib/log/log.c/log_ut.o 00:04:56.521 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:04:56.521 LINK jsonrpc_server_ut 00:04:56.521 CXX test/cpp_headers/version.o 00:04:56.521 CXX test/cpp_headers/vfio_user_pci.o 00:04:56.521 LINK iscsi_ut 00:04:56.779 CXX test/cpp_headers/vfio_user_spec.o 00:04:56.779 LINK log_ut 00:04:56.779 CXX test/cpp_headers/vhost.o 00:04:56.779 CXX test/cpp_headers/vmd.o 00:04:57.037 CC test/unit/lib/notify/notify.c/notify_ut.o 00:04:57.037 CXX test/cpp_headers/xor.o 00:04:57.037 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:04:57.037 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:04:57.037 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:04:57.295 CXX test/cpp_headers/zipf.o 00:04:57.295 LINK notify_ut 00:04:57.295 LINK bdev_ut 00:04:57.295 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:04:57.552 LINK dev_ut 00:04:57.552 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:04:57.810 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:04:57.810 LINK scsi_ut 00:04:57.810 LINK json_parse_ut 00:04:57.810 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:04:58.067 CC test/unit/lib/sock/sock.c/sock_ut.o 00:04:58.067 LINK lun_ut 00:04:58.067 CC test/unit/lib/thread/thread.c/thread_ut.o 00:04:58.325 LINK scsi_pr_ut 00:04:58.325 LINK lvol_ut 00:04:58.583 LINK nvme_ut 00:04:58.583 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:04:58.583 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:04:58.841 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:04:58.841 LINK scsi_bdev_ut 00:04:58.841 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:04:59.099 CC test/unit/lib/util/base64.c/base64_ut.o 00:04:59.099 LINK iobuf_ut 00:04:59.357 LINK base64_ut 00:04:59.616 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:04:59.616 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:04:59.616 LINK sock_ut 00:04:59.874 LINK scsi_nvme_ut 00:04:59.874 CC test/unit/lib/sock/posix.c/posix_ut.o 00:04:59.874 LINK bit_array_ut 00:05:00.132 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:05:00.132 LINK part_ut 00:05:00.390 LINK cpuset_ut 00:05:00.390 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:05:00.390 LINK thread_ut 00:05:00.648 LINK pci_event_ut 00:05:00.648 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:05:00.955 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:05:00.955 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:05:00.955 LINK crc16_ut 00:05:00.955 LINK posix_ut 00:05:00.955 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:05:01.237 LINK subsystem_ut 00:05:01.237 LINK crc32_ieee_ut 00:05:01.237 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:05:01.529 
LINK blob_ut 00:05:01.529 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:05:01.529 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:05:01.529 LINK gpt_ut 00:05:01.786 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:05:01.786 LINK tcp_ut 00:05:01.786 LINK crc32c_ut 00:05:01.786 LINK crc64_ut 00:05:02.043 LINK ctrlr_ut 00:05:02.043 CC test/unit/lib/util/dif.c/dif_ut.o 00:05:02.043 CC test/unit/lib/util/iov.c/iov_ut.o 00:05:02.300 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:05:02.300 CC test/unit/lib/util/math.c/math_ut.o 00:05:02.300 LINK nvme_ctrlr_ut 00:05:02.300 LINK iov_ut 00:05:02.300 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:05:02.558 LINK vbdev_lvol_ut 00:05:02.558 LINK math_ut 00:05:02.558 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:05:02.558 LINK nvme_ctrlr_ocssd_cmd_ut 00:05:02.816 CC test/unit/lib/util/string.c/string_ut.o 00:05:02.816 CC test/unit/lib/util/xor.c/xor_ut.o 00:05:02.816 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:05:02.816 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:05:02.816 LINK pipe_ut 00:05:03.075 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:05:03.075 LINK nvme_ctrlr_cmd_ut 00:05:03.075 LINK xor_ut 00:05:03.075 LINK string_ut 00:05:03.333 LINK dif_ut 00:05:03.333 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:05:03.333 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:05:03.333 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:05:03.333 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:05:03.590 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:05:03.848 LINK nvme_ns_ut 00:05:04.106 LINK nvme_quirks_ut 00:05:04.106 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:05:04.364 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:05:04.364 LINK nvme_poll_group_ut 00:05:04.622 LINK ctrlr_discovery_ut 00:05:04.622 LINK bdev_raid_sb_ut 00:05:04.622 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:05:04.622 LINK nvme_qpair_ut 00:05:04.881 LINK nvme_ns_cmd_ut 00:05:04.881 LINK nvme_ns_ocssd_cmd_ut 00:05:04.881 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:05:04.881 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:05:04.881 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:05:05.139 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:05:05.139 LINK nvme_pcie_ut 00:05:05.139 LINK bdev_raid_ut 00:05:05.139 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:05:05.397 LINK bdev_zone_ut 00:05:05.397 LINK concat_ut 00:05:05.656 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:05:05.656 LINK nvme_transport_ut 00:05:05.656 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:05:05.656 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:05:05.656 LINK subsystem_ut 00:05:05.914 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:05:05.914 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:05:05.914 LINK nvme_io_msg_ut 00:05:05.914 LINK rpc_ut 00:05:06.173 LINK idxd_user_ut 00:05:06.173 LINK ctrlr_bdev_ut 00:05:06.173 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:05:06.173 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:05:06.173 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:05:06.431 LINK bdev_ut 00:05:06.431 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:05:06.431 LINK raid1_ut 00:05:06.431 CC test/unit/lib/rdma/common.c/common_ut.o 00:05:06.431 LINK idxd_ut 00:05:07.035 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:05:07.035 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:05:07.035 CC 
test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:05:07.035 LINK nvme_tcp_ut 00:05:07.035 LINK common_ut 00:05:07.035 LINK nvme_fabric_ut 00:05:07.035 LINK raid5f_ut 00:05:07.035 LINK ftl_l2p_ut 00:05:07.294 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:05:07.294 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:05:07.294 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:05:07.294 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:05:07.294 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:05:07.551 LINK nvme_pcie_common_ut 00:05:07.551 LINK vbdev_zone_block_ut 00:05:07.551 LINK nvme_opal_ut 00:05:07.551 LINK nvmf_ut 00:05:07.808 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:05:07.808 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:05:08.067 LINK ftl_io_ut 00:05:08.067 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:05:08.067 LINK ftl_bitmap_ut 00:05:08.067 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:05:08.326 LINK vhost_ut 00:05:08.326 LINK ftl_mempool_ut 00:05:08.326 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:05:08.326 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:05:08.584 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:05:08.584 LINK ftl_mngt_ut 00:05:08.584 LINK ftl_band_ut 00:05:08.842 LINK nvme_cuse_ut 00:05:09.408 LINK nvme_rdma_ut 00:05:09.666 LINK ftl_layout_upgrade_ut 00:05:09.666 LINK ftl_sb_ut 00:05:11.568 LINK transport_ut 00:05:11.568 LINK rdma_ut 00:05:12.135 LINK bdev_nvme_ut 00:05:12.395 ************************************ 00:05:12.395 END TEST unittest_build 00:05:12.395 ************************************ 00:05:12.395 00:05:12.395 real 1m45.951s 00:05:12.395 user 9m25.357s 00:05:12.395 sys 1m48.273s 00:05:12.395 04:47:42 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:05:12.395 04:47:42 -- common/autotest_common.sh@10 -- $ set +x 00:05:12.395 04:47:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:12.395 04:47:42 -- nvmf/common.sh@7 -- # uname -s 00:05:12.395 04:47:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.395 04:47:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.395 04:47:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.395 04:47:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.395 04:47:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.395 04:47:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.395 04:47:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.395 04:47:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.395 04:47:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.395 04:47:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.395 04:47:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:318d9c73-8cef-4b3a-8523-debaa9ad996e 00:05:12.395 04:47:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=318d9c73-8cef-4b3a-8523-debaa9ad996e 00:05:12.395 04:47:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.395 04:47:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.395 04:47:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:12.395 04:47:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:12.395 04:47:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.395 04:47:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.395 04:47:42 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.395 04:47:42 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:12.395 04:47:42 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:12.395 04:47:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:12.395 04:47:42 -- paths/export.sh@5 -- # export PATH 00:05:12.395 04:47:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:12.395 04:47:42 -- nvmf/common.sh@46 -- # : 0 00:05:12.395 04:47:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:12.395 04:47:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:12.395 04:47:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:12.395 04:47:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.395 04:47:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.395 04:47:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:12.395 04:47:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:12.395 04:47:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:12.395 04:47:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:12.395 04:47:42 -- spdk/autotest.sh@32 -- # uname -s 00:05:12.655 04:47:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:12.655 04:47:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:05:12.655 04:47:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:12.655 04:47:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:12.655 04:47:42 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:12.655 04:47:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:12.655 04:47:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:12.655 04:47:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:05:12.655 04:47:42 -- spdk/autotest.sh@48 -- # udevadm_pid=105324 00:05:12.655 04:47:42 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:05:12.655 04:47:42 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:05:12.655 04:47:42 -- spdk/autotest.sh@54 -- # echo 105329 00:05:12.655 04:47:42 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:05:12.655 04:47:42 -- spdk/autotest.sh@56 -- # echo 105330 00:05:12.655 04:47:42 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:05:12.655 04:47:42 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 
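[editor's note] The autotest.sh prologue traced above installs SPDK's core-dump collector and starts the background load/vmstat monitors before testing begins. A condensed sketch of that setup follows; the redirection of the echoed pattern into /proc/sys/kernel/core_pattern is not visible in the xtrace and is assumed here, as is the pairing of the echoed numbers with the monitor PIDs:

    # Sketch of the monitoring/core-dump setup recorded above (core_pattern target assumed)
    mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
    echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' > /proc/sys/kernel/core_pattern
    /usr/bin/udevadm monitor --property &                                              # 105324 in this run
    /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d ../output/power & # 105329 in this run
    /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d ../output/power &   # 105330 in this run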
00:05:12.655 04:47:42 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:12.655 04:47:42 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:05:12.655 04:47:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:12.655 04:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:12.655 04:47:42 -- spdk/autotest.sh@70 -- # create_test_list 00:05:12.655 04:47:42 -- common/autotest_common.sh@736 -- # xtrace_disable 00:05:12.655 04:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:12.655 04:47:42 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:12.655 04:47:42 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:12.655 04:47:42 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:05:12.655 04:47:42 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:12.655 04:47:42 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:05:12.655 04:47:42 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:05:12.655 04:47:42 -- common/autotest_common.sh@1440 -- # uname 00:05:12.655 04:47:42 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:05:12.655 04:47:42 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:05:12.655 04:47:42 -- common/autotest_common.sh@1460 -- # uname 00:05:12.655 04:47:42 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:05:12.655 04:47:42 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:05:12.655 04:47:42 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:05:12.655 04:47:42 -- spdk/autotest.sh@83 -- # hash lcov 00:05:12.655 04:47:42 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:12.655 04:47:42 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:05:12.655 --rc lcov_branch_coverage=1 00:05:12.655 --rc lcov_function_coverage=1 00:05:12.655 --rc genhtml_branch_coverage=1 00:05:12.655 --rc genhtml_function_coverage=1 00:05:12.655 --rc genhtml_legend=1 00:05:12.655 --rc geninfo_all_blocks=1 00:05:12.655 ' 00:05:12.655 04:47:42 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:05:12.655 --rc lcov_branch_coverage=1 00:05:12.655 --rc lcov_function_coverage=1 00:05:12.655 --rc genhtml_branch_coverage=1 00:05:12.655 --rc genhtml_function_coverage=1 00:05:12.655 --rc genhtml_legend=1 00:05:12.655 --rc geninfo_all_blocks=1 00:05:12.655 ' 00:05:12.655 04:47:42 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:05:12.655 --rc lcov_branch_coverage=1 00:05:12.655 --rc lcov_function_coverage=1 00:05:12.655 --rc genhtml_branch_coverage=1 00:05:12.655 --rc genhtml_function_coverage=1 00:05:12.655 --rc genhtml_legend=1 00:05:12.655 --rc geninfo_all_blocks=1 00:05:12.655 --no-external' 00:05:12.655 04:47:42 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:05:12.655 --rc lcov_branch_coverage=1 00:05:12.655 --rc lcov_function_coverage=1 00:05:12.655 --rc genhtml_branch_coverage=1 00:05:12.655 --rc genhtml_function_coverage=1 00:05:12.655 --rc genhtml_legend=1 00:05:12.655 --rc geninfo_all_blocks=1 00:05:12.655 --no-external' 00:05:12.655 04:47:42 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:05:12.655 lcov: LCOV version 1.15 00:05:12.655 04:47:42 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:30.741 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:30.741 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:30.741 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:30.741 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:30.741 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:30.741 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:06:02.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:06:02.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:06:02.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:06:02.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:06:02.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:06:02.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:06:02.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:06:02.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:06:02.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:06:02.815 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:06:02.815 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no 
functions found 00:06:02.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:06:02.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:06:02.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:06:02.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:06:02.816 04:48:30 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:06:02.816 04:48:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:02.816 04:48:30 -- common/autotest_common.sh@10 -- # set +x 00:06:02.816 04:48:30 -- spdk/autotest.sh@102 -- # rm -f 00:06:02.816 04:48:30 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:02.816 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:02.816 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:06:02.816 04:48:30 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:06:02.816 04:48:30 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:06:02.816 04:48:30 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:06:02.816 04:48:30 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:06:02.816 04:48:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:06:02.816 04:48:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:06:02.816 04:48:30 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:06:02.816 04:48:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:02.816 04:48:30 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:06:02.816 04:48:30 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:06:02.816 04:48:30 -- spdk/autotest.sh@121 -- # grep -v p 00:06:02.816 04:48:30 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:06:02.816 04:48:30 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:06:02.816 04:48:30 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:06:02.816 04:48:30 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:06:02.816 04:48:30 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:06:02.816 04:48:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:02.816 No valid GPT data, bailing 00:06:02.816 04:48:30 -- scripts/common.sh@393 -- # blkid 
-s PTTYPE -o value /dev/nvme0n1 00:06:02.816 04:48:30 -- scripts/common.sh@393 -- # pt= 00:06:02.816 04:48:30 -- scripts/common.sh@394 -- # return 1 00:06:02.816 04:48:30 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:02.816 1+0 records in 00:06:02.816 1+0 records out 00:06:02.816 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00601361 s, 174 MB/s 00:06:02.816 04:48:30 -- spdk/autotest.sh@129 -- # sync 00:06:02.816 04:48:30 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:02.816 04:48:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:02.816 04:48:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:05.376 04:48:34 -- spdk/autotest.sh@135 -- # uname -s 00:06:05.376 04:48:34 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:06:05.376 04:48:34 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:06:05.376 04:48:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:05.376 04:48:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.376 04:48:34 -- common/autotest_common.sh@10 -- # set +x 00:06:05.376 ************************************ 00:06:05.376 START TEST setup.sh 00:06:05.376 ************************************ 00:06:05.376 04:48:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:06:05.376 * Looking for test storage... 00:06:05.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:05.376 04:48:34 -- setup/test-setup.sh@10 -- # uname -s 00:06:05.376 04:48:34 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:06:05.376 04:48:34 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:06:05.376 04:48:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:05.376 04:48:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.376 04:48:34 -- common/autotest_common.sh@10 -- # set +x 00:06:05.376 ************************************ 00:06:05.376 START TEST acl 00:06:05.376 ************************************ 00:06:05.376 04:48:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:06:05.376 * Looking for test storage... 
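The device-wipe step traced just above checks whether /dev/nvme0n1 carries a partition table and, finding none ("No valid GPT data, bailing"), zeroes its first MiB before the setup tests begin. A condensed sketch of that check-and-wipe, using the same device name as the log; the real block_in_use helper additionally consults spdk-gpt.py and mount state, which is omitted here.

    dev=/dev/nvme0n1
    # blkid prints the partition-table type (gpt, dos, ...) or nothing at all.
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z "$pt" ]]; then
        # No partition table -> the namespace is treated as free for testing;
        # wipe the first MiB so stale metadata cannot confuse later tests.
        dd if=/dev/zero of="$dev" bs=1M count=1
        sync
    fi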
00:06:05.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:05.376 04:48:35 -- setup/acl.sh@10 -- # get_zoned_devs 00:06:05.376 04:48:35 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:06:05.377 04:48:35 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:06:05.377 04:48:35 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:06:05.377 04:48:35 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:06:05.377 04:48:35 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:06:05.377 04:48:35 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:06:05.377 04:48:35 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:05.377 04:48:35 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:06:05.377 04:48:35 -- setup/acl.sh@12 -- # devs=() 00:06:05.377 04:48:35 -- setup/acl.sh@12 -- # declare -a devs 00:06:05.377 04:48:35 -- setup/acl.sh@13 -- # drivers=() 00:06:05.377 04:48:35 -- setup/acl.sh@13 -- # declare -A drivers 00:06:05.377 04:48:35 -- setup/acl.sh@51 -- # setup reset 00:06:05.377 04:48:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:05.377 04:48:35 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:05.635 04:48:35 -- setup/acl.sh@52 -- # collect_setup_devs 00:06:05.635 04:48:35 -- setup/acl.sh@16 -- # local dev driver 00:06:05.635 04:48:35 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:05.635 04:48:35 -- setup/acl.sh@15 -- # setup output status 00:06:05.635 04:48:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:05.635 04:48:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:05.895 Hugepages 00:06:05.895 node hugesize free / total 00:06:05.895 04:48:35 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:05.895 04:48:35 -- setup/acl.sh@19 -- # continue 00:06:05.895 04:48:35 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:05.895 00:06:05.895 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:05.895 04:48:35 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:05.895 04:48:35 -- setup/acl.sh@19 -- # continue 00:06:05.895 04:48:35 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:05.895 04:48:35 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:06:05.895 04:48:35 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:06:05.895 04:48:35 -- setup/acl.sh@20 -- # continue 00:06:05.895 04:48:35 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:06.153 04:48:35 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:06:06.153 04:48:35 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:06:06.153 04:48:35 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:06:06.153 04:48:35 -- setup/acl.sh@22 -- # devs+=("$dev") 00:06:06.153 04:48:35 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:06:06.153 04:48:35 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:06.153 04:48:35 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:06:06.153 04:48:35 -- setup/acl.sh@54 -- # run_test denied denied 00:06:06.153 04:48:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:06.153 04:48:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.153 04:48:35 -- common/autotest_common.sh@10 -- # set +x 00:06:06.153 ************************************ 00:06:06.153 START TEST denied 00:06:06.153 ************************************ 00:06:06.153 04:48:35 -- common/autotest_common.sh@1104 -- # denied 00:06:06.153 04:48:35 -- 
setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:06:06.153 04:48:35 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:06:06.153 04:48:35 -- setup/acl.sh@38 -- # setup output config 00:06:06.153 04:48:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:06.153 04:48:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:08.106 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:06:08.106 04:48:37 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:06:08.106 04:48:37 -- setup/acl.sh@28 -- # local dev driver 00:06:08.106 04:48:37 -- setup/acl.sh@30 -- # for dev in "$@" 00:06:08.106 04:48:37 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:06:08.106 04:48:37 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:06:08.106 04:48:37 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:08.106 04:48:37 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:08.106 04:48:37 -- setup/acl.sh@41 -- # setup reset 00:06:08.106 04:48:37 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:08.106 04:48:37 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:08.673 ************************************ 00:06:08.673 END TEST denied 00:06:08.673 ************************************ 00:06:08.673 00:06:08.673 real 0m2.520s 00:06:08.673 user 0m0.479s 00:06:08.673 sys 0m2.087s 00:06:08.673 04:48:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.673 04:48:38 -- common/autotest_common.sh@10 -- # set +x 00:06:08.673 04:48:38 -- setup/acl.sh@55 -- # run_test allowed allowed 00:06:08.673 04:48:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.673 04:48:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.673 04:48:38 -- common/autotest_common.sh@10 -- # set +x 00:06:08.673 ************************************ 00:06:08.673 START TEST allowed 00:06:08.673 ************************************ 00:06:08.673 04:48:38 -- common/autotest_common.sh@1104 -- # allowed 00:06:08.673 04:48:38 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:06:08.673 04:48:38 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:06:08.673 04:48:38 -- setup/acl.sh@45 -- # setup output config 00:06:08.673 04:48:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:08.673 04:48:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:11.208 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:06:11.208 04:48:40 -- setup/acl.sh@47 -- # verify 00:06:11.208 04:48:40 -- setup/acl.sh@28 -- # local dev driver 00:06:11.208 04:48:40 -- setup/acl.sh@48 -- # setup reset 00:06:11.208 04:48:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:11.208 04:48:40 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:11.208 ************************************ 00:06:11.208 END TEST allowed 00:06:11.208 ************************************ 00:06:11.208 00:06:11.208 real 0m2.519s 00:06:11.208 user 0m0.436s 00:06:11.208 sys 0m2.092s 00:06:11.208 04:48:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.208 04:48:40 -- common/autotest_common.sh@10 -- # set +x 00:06:11.208 ************************************ 00:06:11.208 END TEST acl 00:06:11.208 ************************************ 00:06:11.208 00:06:11.208 real 0m6.035s 00:06:11.208 user 0m1.543s 00:06:11.208 sys 0m4.602s 00:06:11.208 04:48:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.208 
04:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:11.208 04:48:41 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:11.208 04:48:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:11.208 04:48:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.208 04:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:11.208 ************************************ 00:06:11.208 START TEST hugepages 00:06:11.208 ************************************ 00:06:11.208 04:48:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:11.468 * Looking for test storage... 00:06:11.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:11.468 04:48:41 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:06:11.468 04:48:41 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:06:11.468 04:48:41 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:06:11.468 04:48:41 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:06:11.468 04:48:41 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:06:11.468 04:48:41 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:06:11.468 04:48:41 -- setup/common.sh@17 -- # local get=Hugepagesize 00:06:11.468 04:48:41 -- setup/common.sh@18 -- # local node= 00:06:11.468 04:48:41 -- setup/common.sh@19 -- # local var val 00:06:11.468 04:48:41 -- setup/common.sh@20 -- # local mem_f mem 00:06:11.468 04:48:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:11.468 04:48:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:11.468 04:48:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:11.468 04:48:41 -- setup/common.sh@28 -- # mapfile -t mem 00:06:11.468 04:48:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.468 04:48:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 1484552 kB' 'MemAvailable: 7374228 kB' 'Buffers: 42352 kB' 'Cached: 5929288 kB' 'SwapCached: 0 kB' 'Active: 1649464 kB' 'Inactive: 4444000 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 132412 kB' 'Active(file): 1648388 kB' 'Inactive(file): 4311588 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 432 kB' 'Writeback: 12 kB' 'AnonPages: 151060 kB' 'Mapped: 68428 kB' 'Shmem: 2600 kB' 'KReclaimable: 251004 kB' 'Slab: 328468 kB' 'SReclaimable: 251004 kB' 'SUnreclaim: 77464 kB' 'KernelStack: 14328 kB' 'PageTables: 4264 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024336 kB' 'Committed_AS: 511620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:06:11.468 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.468 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.468 04:48:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # 
[[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 
-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 
04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # continue 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # IFS=': ' 00:06:11.469 04:48:41 -- setup/common.sh@31 -- # read -r var val _ 00:06:11.469 04:48:41 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:11.469 04:48:41 -- setup/common.sh@33 -- # echo 2048 00:06:11.469 04:48:41 -- setup/common.sh@33 -- # return 0 00:06:11.469 04:48:41 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:06:11.469 04:48:41 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:06:11.469 04:48:41 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:06:11.469 04:48:41 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:06:11.469 04:48:41 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:06:11.469 04:48:41 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:06:11.469 04:48:41 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:06:11.469 04:48:41 -- setup/hugepages.sh@207 -- # get_nodes 00:06:11.469 04:48:41 -- setup/hugepages.sh@27 -- # local node 00:06:11.469 04:48:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:11.469 04:48:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:06:11.469 04:48:41 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:11.469 04:48:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:11.469 04:48:41 -- setup/hugepages.sh@208 -- # clear_hp 00:06:11.469 04:48:41 -- setup/hugepages.sh@37 -- # local node hp 00:06:11.469 04:48:41 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:11.469 04:48:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:11.469 04:48:41 -- setup/hugepages.sh@41 -- # echo 0 00:06:11.469 04:48:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:11.470 04:48:41 -- setup/hugepages.sh@41 -- # echo 0 
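The long trace above is hugepages.sh walking /proc/meminfo key by key until it reaches Hugepagesize (2048 kB on this VM), then clearing any hugepages already reserved on each NUMA node before the tests set their own counts. A compact sketch of those two steps; it assumes the bare "echo 0" in the trace is redirected into each node's nr_hugepages file, which the xtrace does not show.

    # Default hugepage size in kB, e.g. 2048 here.
    default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

    # Drop any pre-existing per-node reservations for every hugepage size.
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done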
00:06:11.470 04:48:41 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:11.470 04:48:41 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:11.470 04:48:41 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:06:11.470 04:48:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:11.470 04:48:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.470 04:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:11.470 ************************************ 00:06:11.470 START TEST default_setup 00:06:11.470 ************************************ 00:06:11.470 04:48:41 -- common/autotest_common.sh@1104 -- # default_setup 00:06:11.470 04:48:41 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:06:11.470 04:48:41 -- setup/hugepages.sh@49 -- # local size=2097152 00:06:11.470 04:48:41 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:11.470 04:48:41 -- setup/hugepages.sh@51 -- # shift 00:06:11.470 04:48:41 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:11.470 04:48:41 -- setup/hugepages.sh@52 -- # local node_ids 00:06:11.470 04:48:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:11.470 04:48:41 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:11.470 04:48:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:11.470 04:48:41 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:11.470 04:48:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:11.470 04:48:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:11.470 04:48:41 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:11.470 04:48:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:11.470 04:48:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:11.470 04:48:41 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:11.470 04:48:41 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:11.470 04:48:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:11.470 04:48:41 -- setup/hugepages.sh@73 -- # return 0 00:06:11.470 04:48:41 -- setup/hugepages.sh@137 -- # setup output 00:06:11.470 04:48:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:11.470 04:48:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:11.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:11.987 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:06:12.929 04:48:42 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:06:12.929 04:48:42 -- setup/hugepages.sh@89 -- # local node 00:06:12.929 04:48:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:12.929 04:48:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:12.929 04:48:42 -- setup/hugepages.sh@92 -- # local surp 00:06:12.929 04:48:42 -- setup/hugepages.sh@93 -- # local resv 00:06:12.929 04:48:42 -- setup/hugepages.sh@94 -- # local anon 00:06:12.929 04:48:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:12.929 04:48:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:12.929 04:48:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:12.929 04:48:42 -- setup/common.sh@18 -- # local node= 00:06:12.929 04:48:42 -- setup/common.sh@19 -- # local var val 00:06:12.929 04:48:42 -- setup/common.sh@20 -- # local mem_f mem 00:06:12.929 04:48:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:12.929 04:48:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:12.929 04:48:42 -- setup/common.sh@25 -- # [[ -n 
'' ]] 00:06:12.929 04:48:42 -- setup/common.sh@28 -- # mapfile -t mem 00:06:12.929 04:48:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:12.929 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.929 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3565928 kB' 'MemAvailable: 9455800 kB' 'Buffers: 42352 kB' 'Cached: 5929324 kB' 'SwapCached: 0 kB' 'Active: 1649500 kB' 'Inactive: 4457612 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 146024 kB' 'Active(file): 1648428 kB' 'Inactive(file): 4311588 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'AnonPages: 164936 kB' 'Mapped: 69208 kB' 'Shmem: 2596 kB' 'KReclaimable: 251160 kB' 'Slab: 328516 kB' 'SReclaimable: 251160 kB' 'SUnreclaim: 77356 kB' 'KernelStack: 14308 kB' 'PageTables: 3732 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 524696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- 
setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ 
Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.930 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.930 04:48:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- 
# read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:12.931 04:48:42 -- setup/common.sh@33 -- # echo 0 00:06:12.931 04:48:42 -- setup/common.sh@33 -- # return 0 00:06:12.931 04:48:42 -- setup/hugepages.sh@97 -- # anon=0 00:06:12.931 04:48:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:12.931 04:48:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:12.931 04:48:42 -- setup/common.sh@18 -- # local node= 00:06:12.931 04:48:42 -- setup/common.sh@19 -- # local var val 00:06:12.931 04:48:42 -- setup/common.sh@20 -- # local mem_f mem 00:06:12.931 04:48:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:12.931 04:48:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:12.931 04:48:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:12.931 04:48:42 -- setup/common.sh@28 -- # mapfile -t mem 00:06:12.931 04:48:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3565928 kB' 'MemAvailable: 9455800 kB' 'Buffers: 42352 kB' 'Cached: 5929324 kB' 'SwapCached: 0 kB' 'Active: 1649500 kB' 'Inactive: 4457412 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 145824 kB' 'Active(file): 1648428 kB' 'Inactive(file): 4311588 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'AnonPages: 164476 kB' 'Mapped: 68428 kB' 'Shmem: 2596 kB' 'KReclaimable: 251160 kB' 'Slab: 328516 kB' 'SReclaimable: 251160 kB' 'SUnreclaim: 77356 kB' 'KernelStack: 14304 kB' 'PageTables: 3912 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 522092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.931 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.931 04:48:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 
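The block above is the bash xtrace of setup/common.sh's get_meminfo walking every /proc/meminfo key until it reaches the one it was asked for (AnonHugePages, then HugePages_Surp) and echoing its value. A minimal stand-alone sketch of that lookup, reconstructed from what the trace shows — the function name and argument handling here are illustrative, not the exact SPDK helper:

shopt -s extglob                       # needed for the "Node N " prefix strip below

get_meminfo_sketch() {
    local get=$1 node=$2               # field to read, optional NUMA node id
    local mem_f=/proc/meminfo
    # when a node id is given and the sysfs copy exists, read the per-node file instead
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node 0 ..."; drop that
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then  # e.g. HugePages_Surp
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# e.g. get_meminfo_sketch HugePages_Total 0   ->  1024 on this run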
00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.932 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.932 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@32 
-- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.933 04:48:42 -- setup/common.sh@33 -- # echo 0 00:06:12.933 04:48:42 -- setup/common.sh@33 -- # return 0 00:06:12.933 04:48:42 -- setup/hugepages.sh@99 -- # surp=0 00:06:12.933 04:48:42 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:12.933 04:48:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:12.933 04:48:42 -- setup/common.sh@18 -- # local node= 00:06:12.933 04:48:42 -- setup/common.sh@19 -- # local var val 00:06:12.933 04:48:42 -- setup/common.sh@20 -- # local mem_f mem 00:06:12.933 04:48:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:12.933 04:48:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:12.933 04:48:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:12.933 04:48:42 -- setup/common.sh@28 -- # mapfile -t mem 00:06:12.933 04:48:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3566716 kB' 'MemAvailable: 9456588 kB' 'Buffers: 42352 kB' 'Cached: 5929324 kB' 'SwapCached: 0 kB' 'Active: 1649508 kB' 'Inactive: 4457016 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 145428 kB' 'Active(file): 1648428 kB' 'Inactive(file): 4311588 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'AnonPages: 164032 kB' 
'Mapped: 68368 kB' 'Shmem: 2596 kB' 'KReclaimable: 251160 kB' 'Slab: 328516 kB' 'SReclaimable: 251160 kB' 'SUnreclaim: 77356 kB' 'KernelStack: 14272 kB' 'PageTables: 3808 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 522092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.933 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.933 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 
04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.934 04:48:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.934 04:48:42 
-- setup/common.sh@32 -- # continue 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.934 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.935 04:48:42 -- setup/common.sh@33 -- # echo 0 00:06:12.935 04:48:42 -- setup/common.sh@33 -- # return 0 00:06:12.935 04:48:42 -- setup/hugepages.sh@100 -- # resv=0 00:06:12.935 nr_hugepages=1024 00:06:12.935 04:48:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:12.935 resv_hugepages=0 00:06:12.935 04:48:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:12.935 surplus_hugepages=0 00:06:12.935 04:48:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:12.935 anon_hugepages=0 00:06:12.935 04:48:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:12.935 04:48:42 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:12.935 04:48:42 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:12.935 04:48:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:12.935 04:48:42 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:12.935 04:48:42 -- setup/common.sh@18 -- # local node= 00:06:12.935 
04:48:42 -- setup/common.sh@19 -- # local var val 00:06:12.935 04:48:42 -- setup/common.sh@20 -- # local mem_f mem 00:06:12.935 04:48:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:12.935 04:48:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:12.935 04:48:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:12.935 04:48:42 -- setup/common.sh@28 -- # mapfile -t mem 00:06:12.935 04:48:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.935 04:48:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3566716 kB' 'MemAvailable: 9456588 kB' 'Buffers: 42352 kB' 'Cached: 5929324 kB' 'SwapCached: 0 kB' 'Active: 1649508 kB' 'Inactive: 4457172 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 145584 kB' 'Active(file): 1648428 kB' 'Inactive(file): 4311588 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'AnonPages: 164184 kB' 'Mapped: 68368 kB' 'Shmem: 2596 kB' 'KReclaimable: 251160 kB' 'Slab: 328516 kB' 'SReclaimable: 251160 kB' 'SUnreclaim: 77356 kB' 'KernelStack: 14292 kB' 'PageTables: 3692 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 522092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.935 04:48:42 -- setup/common.sh@32 
-- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.935 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.935 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # 
IFS=': ' 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.936 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.936 04:48:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.937 04:48:42 -- setup/common.sh@33 -- # echo 1024 00:06:12.937 04:48:42 -- setup/common.sh@33 -- # return 0 00:06:12.937 04:48:42 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:12.937 04:48:42 -- setup/hugepages.sh@112 -- # get_nodes 00:06:12.937 04:48:42 -- setup/hugepages.sh@27 -- # local node 00:06:12.937 04:48:42 -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:12.937 04:48:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:12.937 04:48:42 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:12.937 04:48:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:12.937 04:48:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:12.937 04:48:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:12.937 04:48:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:12.937 04:48:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:12.937 04:48:42 -- setup/common.sh@18 -- # local node=0 00:06:12.937 04:48:42 -- setup/common.sh@19 -- # local var val 00:06:12.937 04:48:42 -- setup/common.sh@20 -- # local mem_f mem 00:06:12.937 04:48:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:12.937 04:48:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:12.937 04:48:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:12.937 04:48:42 -- setup/common.sh@28 -- # mapfile -t mem 00:06:12.937 04:48:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3566936 kB' 'MemUsed: 8676040 kB' 'SwapCached: 0 kB' 'Active: 1649500 kB' 'Inactive: 4457204 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 145616 kB' 'Active(file): 1648428 kB' 'Inactive(file): 4311588 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'FilePages: 5971676 kB' 'Mapped: 68368 kB' 'AnonPages: 164196 kB' 'Shmem: 2596 kB' 'KernelStack: 14228 kB' 'PageTables: 3800 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 251160 kB' 'Slab: 328516 kB' 'SReclaimable: 251160 kB' 'SUnreclaim: 77356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.937 04:48:42 -- setup/common.sh@31 -- # 
read -r var val _ 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.937 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 
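For context on the numbers being checked here: get_test_nr_hugepages was called with size 2097152 kB for node 0, which with the 2048 kB default hugepage size on this VM works out to 1024 pages, and the per-node read of HugePages_Total above is what feeds the "node0=1024 expecting 1024" line a few entries below. A rough stand-alone sketch of that arithmetic and check, using illustrative names and assuming node0's sysfs meminfo file is present, rather than the exact hugepages.sh helpers:

size_kb=2097152
default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB here
nr_hugepages=$(( size_kb / default_kb ))                        # 2097152 / 2048 = 1024
expected[0]=$nr_hugepages                                       # node 0 was the requested node

for node in "${!expected[@]}"; do
    total=$(awk '/HugePages_Total:/ {print $NF}' \
        "/sys/devices/system/node/node$node/meminfo")
    echo "node$node=$total expecting ${expected[$node]}"
    [[ $total == "${expected[$node]}" ]] || exit 1              # fail the test on mismatch
done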
04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.938 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.938 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.939 04:48:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.939 04:48:42 -- setup/common.sh@32 -- # continue 00:06:12.939 04:48:42 -- setup/common.sh@31 -- # IFS=': ' 00:06:12.939 04:48:42 -- setup/common.sh@31 -- # read -r var val _ 00:06:12.939 04:48:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.939 04:48:42 -- setup/common.sh@33 -- # echo 0 00:06:12.939 04:48:42 -- setup/common.sh@33 -- # return 0 00:06:12.939 04:48:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:12.939 04:48:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:12.939 04:48:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:12.939 04:48:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:12.939 node0=1024 expecting 1024 00:06:12.939 04:48:42 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:12.939 04:48:42 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:12.939 00:06:12.939 real 0m1.429s 00:06:12.939 user 0m0.378s 00:06:12.939 sys 0m1.059s 00:06:12.939 04:48:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.939 04:48:42 -- common/autotest_common.sh@10 -- # set +x 00:06:12.939 ************************************ 00:06:12.939 END TEST default_setup 00:06:12.939 ************************************ 00:06:12.939 04:48:42 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:06:12.939 04:48:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.939 04:48:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.939 04:48:42 -- common/autotest_common.sh@10 -- # set +x 00:06:12.939 ************************************ 00:06:12.939 START TEST per_node_1G_alloc 00:06:12.939 ************************************ 00:06:12.939 04:48:42 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:06:12.939 04:48:42 -- setup/hugepages.sh@143 -- # local IFS=, 00:06:12.939 04:48:42 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:06:12.939 04:48:42 -- setup/hugepages.sh@49 -- # local size=1048576 00:06:12.939 04:48:42 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:12.939 04:48:42 -- setup/hugepages.sh@51 -- # shift 00:06:12.939 04:48:42 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:12.939 04:48:42 -- setup/hugepages.sh@52 -- # local node_ids 00:06:12.939 04:48:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:12.939 04:48:42 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:12.939 04:48:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:12.939 04:48:42 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:12.939 04:48:42 -- 
setup/hugepages.sh@62 -- # local user_nodes 00:06:12.939 04:48:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:12.939 04:48:42 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:12.939 04:48:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:12.939 04:48:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:12.939 04:48:42 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:12.939 04:48:42 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:12.939 04:48:42 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:06:12.939 04:48:42 -- setup/hugepages.sh@73 -- # return 0 00:06:12.939 04:48:42 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:06:12.939 04:48:42 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:06:12.939 04:48:42 -- setup/hugepages.sh@146 -- # setup output 00:06:12.939 04:48:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:12.939 04:48:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:13.225 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:13.225 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:13.795 04:48:43 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:06:13.795 04:48:43 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:06:13.795 04:48:43 -- setup/hugepages.sh@89 -- # local node 00:06:13.795 04:48:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:13.795 04:48:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:13.795 04:48:43 -- setup/hugepages.sh@92 -- # local surp 00:06:13.795 04:48:43 -- setup/hugepages.sh@93 -- # local resv 00:06:13.795 04:48:43 -- setup/hugepages.sh@94 -- # local anon 00:06:13.795 04:48:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:13.795 04:48:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:13.795 04:48:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:13.795 04:48:43 -- setup/common.sh@18 -- # local node= 00:06:13.795 04:48:43 -- setup/common.sh@19 -- # local var val 00:06:13.795 04:48:43 -- setup/common.sh@20 -- # local mem_f mem 00:06:13.795 04:48:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:13.795 04:48:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:13.795 04:48:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:13.795 04:48:43 -- setup/common.sh@28 -- # mapfile -t mem 00:06:13.795 04:48:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:13.795 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.795 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.795 04:48:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4613372 kB' 'MemAvailable: 10503244 kB' 'Buffers: 42352 kB' 'Cached: 5929328 kB' 'SwapCached: 0 kB' 'Active: 1649536 kB' 'Inactive: 4457532 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 145972 kB' 'Active(file): 1648456 kB' 'Inactive(file): 4311560 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'AnonPages: 164620 kB' 'Mapped: 68380 kB' 'Shmem: 2596 kB' 'KReclaimable: 251160 kB' 'Slab: 328428 kB' 'SReclaimable: 251160 kB' 'SUnreclaim: 77268 kB' 'KernelStack: 14340 kB' 'PageTables: 4032 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 522092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:13.795 04:48:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.795 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.795 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.795 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.795 04:48:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.795 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.795 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.795 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.795 04:48:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.795 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- 
setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.796 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.796 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.797 04:48:43 -- setup/common.sh@33 -- # echo 0 00:06:13.797 04:48:43 -- setup/common.sh@33 -- # return 0 00:06:13.797 04:48:43 -- setup/hugepages.sh@97 -- # anon=0 00:06:13.797 04:48:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:13.797 04:48:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:13.797 04:48:43 -- setup/common.sh@18 -- # local node= 00:06:13.797 04:48:43 -- setup/common.sh@19 -- # local var val 00:06:13.797 04:48:43 -- setup/common.sh@20 -- # local mem_f mem 00:06:13.797 04:48:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:13.797 04:48:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:13.797 04:48:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:13.797 04:48:43 -- setup/common.sh@28 -- # mapfile -t mem 00:06:13.797 04:48:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4613372 kB' 'MemAvailable: 10503244 kB' 'Buffers: 42352 kB' 'Cached: 5929328 kB' 'SwapCached: 0 kB' 'Active: 1649536 kB' 'Inactive: 4457272 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 145712 kB' 'Active(file): 1648456 kB' 'Inactive(file): 4311560 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'AnonPages: 164360 kB' 'Mapped: 68380 kB' 'Shmem: 2596 kB' 'KReclaimable: 251160 kB' 'Slab: 328428 kB' 'SReclaimable: 251160 kB' 'SUnreclaim: 77268 kB' 'KernelStack: 14340 kB' 'PageTables: 4032 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 522092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- 
# continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # 
[[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.797 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.797 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
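The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" entries above is the xtrace of setup/common.sh's get_meminfo helper walking every key in the meminfo snapshot until it reaches the one it was asked for. A minimal sketch of that lookup follows; it is a simplified rewrite for illustration (the real helper uses mapfile plus an extglob substitution to strip the "Node N " prefix), and the function name meminfo_lookup is not from the script.

    # Simplified illustration of the lookup traced above, not setup/common.sh itself.
    meminfo_lookup() {
        local key=$1 node=$2 file=/proc/meminfo line var val _
        # Per-node counters live in sysfs and carry a "Node N " prefix.
        [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#"Node $node "}              # drop the per-node prefix when present
            IFS=': ' read -r var val _ <<<"$line"   # split "Key:   value kB"
            [[ $var == "$key" ]] || continue        # the "continue" storm seen in the trace
            echo "$val"                             # kB value, or a bare page count
            return 0
        done <"$file"
        return 1
    }
    # Example matching this run: meminfo_lookup HugePages_Surp   -> 0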
00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.798 04:48:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.798 04:48:43 -- setup/common.sh@33 -- # echo 0 00:06:13.798 04:48:43 -- setup/common.sh@33 -- # return 0 00:06:13.798 04:48:43 -- setup/hugepages.sh@99 -- # surp=0 00:06:13.798 04:48:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:13.798 04:48:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:13.798 04:48:43 -- setup/common.sh@18 -- # local node= 00:06:13.798 04:48:43 -- setup/common.sh@19 -- # local var val 00:06:13.798 04:48:43 -- setup/common.sh@20 -- # local mem_f mem 00:06:13.798 04:48:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:13.798 04:48:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:13.798 04:48:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:13.798 04:48:43 -- setup/common.sh@28 -- # mapfile -t mem 00:06:13.798 04:48:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.798 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4613372 kB' 'MemAvailable: 10503244 kB' 'Buffers: 42352 kB' 'Cached: 5929328 kB' 'SwapCached: 0 kB' 'Active: 1649548 kB' 'Inactive: 4457224 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 145668 kB' 'Active(file): 1648460 kB' 'Inactive(file): 4311556 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'AnonPages: 164392 kB' 'Mapped: 68380 kB' 'Shmem: 2596 kB' 'KReclaimable: 251160 kB' 'Slab: 328428 kB' 'SReclaimable: 251160 kB' 'SUnreclaim: 77268 kB' 'KernelStack: 14272 kB' 'PageTables: 4088 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 521704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 
-- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
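The nr_hugepages=512 value this test is verifying comes from simple arithmetic: per_node_1G_alloc requests 1048576 kB (1 GiB) per node, and the meminfo snapshots above report Hugepagesize: 2048 kB, so 512 pages are needed. A worked restatement, with illustrative variable names rather than the ones in hugepages.sh:

    size_kb=1048576                                                    # 1 GiB requested per node
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo) # 2048 on this runner
    echo $(( size_kb / hugepagesize_kb ))                              # -> 512, i.e. NRHUGE=512 with HUGENODE=0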
00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.799 
04:48:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.799 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.799 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 
04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.800 04:48:43 -- setup/common.sh@33 -- # echo 0 00:06:13.800 04:48:43 -- setup/common.sh@33 -- # return 0 00:06:13.800 04:48:43 -- setup/hugepages.sh@100 -- # resv=0 00:06:13.800 nr_hugepages=512 00:06:13.800 04:48:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:06:13.800 resv_hugepages=0 00:06:13.800 04:48:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:13.800 surplus_hugepages=0 00:06:13.800 04:48:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:13.800 anon_hugepages=0 00:06:13.800 04:48:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:13.800 04:48:43 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:13.800 04:48:43 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:06:13.800 04:48:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:13.800 04:48:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:13.800 04:48:43 -- setup/common.sh@18 -- # local node= 00:06:13.800 04:48:43 -- setup/common.sh@19 -- # local var val 00:06:13.800 04:48:43 -- setup/common.sh@20 -- # local mem_f mem 00:06:13.800 04:48:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:13.800 04:48:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:13.800 04:48:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:13.800 04:48:43 -- setup/common.sh@28 -- # mapfile -t mem 00:06:13.800 04:48:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4613372 kB' 'MemAvailable: 10503244 kB' 'Buffers: 42352 kB' 'Cached: 5929328 kB' 'SwapCached: 0 kB' 'Active: 1649548 kB' 'Inactive: 4457320 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 145764 kB' 'Active(file): 1648460 kB' 'Inactive(file): 4311556 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'AnonPages: 164476 kB' 'Mapped: 68380 kB' 'Shmem: 2596 kB' 'KReclaimable: 251160 kB' 'Slab: 328436 kB' 'SReclaimable: 251160 kB' 'SUnreclaim: 77276 kB' 'KernelStack: 14372 kB' 'PageTables: 
4168 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 522092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.800 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.800 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
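This scan ends a few entries below by echoing HugePages_Total (512 here), and verify_nr_hugepages then requires that the total equal the requested pages plus the surplus and reserved counts it just collected, i.e. 512 == 512 + 0 + 0 on this run. A self-contained restatement of that check, reading /proc/meminfo with awk for brevity:

    nr=512                                                         # pages requested by this test
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)    # 512 in the snapshot above
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)      # 0
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)      # 0
    (( total == nr + surp + resv )) && echo "hugepage accounting consistent: $total == $nr+$surp+$resv"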
00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var 
val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.801 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.801 04:48:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 
-- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.802 04:48:43 -- setup/common.sh@33 -- # echo 512 00:06:13.802 04:48:43 -- setup/common.sh@33 -- # return 0 00:06:13.802 04:48:43 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:13.802 04:48:43 -- setup/hugepages.sh@112 -- # get_nodes 00:06:13.802 04:48:43 -- setup/hugepages.sh@27 -- # local node 00:06:13.802 04:48:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:13.802 04:48:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:13.802 04:48:43 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:13.802 04:48:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:13.802 04:48:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:13.802 04:48:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:13.802 04:48:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:13.802 04:48:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:13.802 04:48:43 -- setup/common.sh@18 -- # local node=0 00:06:13.802 04:48:43 -- setup/common.sh@19 -- # local var val 00:06:13.802 04:48:43 -- setup/common.sh@20 -- # local mem_f mem 00:06:13.802 04:48:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:13.802 04:48:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:13.802 04:48:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:13.802 04:48:43 -- setup/common.sh@28 -- # mapfile -t mem 00:06:13.802 04:48:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:13.802 
04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4613372 kB' 'MemUsed: 7629604 kB' 'SwapCached: 0 kB' 'Active: 1649548 kB' 'Inactive: 4457580 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 146024 kB' 'Active(file): 1648460 kB' 'Inactive(file): 4311556 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'FilePages: 5971680 kB' 'Mapped: 68380 kB' 'AnonPages: 164996 kB' 'Shmem: 2596 kB' 'KernelStack: 14372 kB' 'PageTables: 4168 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 251160 kB' 'Slab: 328436 kB' 'SReclaimable: 251160 kB' 'SUnreclaim: 77276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.802 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.802 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read 
-r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # continue 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.803 04:48:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.803 04:48:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.803 04:48:43 -- 
setup/common.sh@33 -- # echo 0 00:06:13.803 04:48:43 -- setup/common.sh@33 -- # return 0 00:06:13.803 04:48:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:13.803 04:48:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:13.803 04:48:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:13.803 04:48:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:13.803 node0=512 expecting 512 00:06:13.803 04:48:43 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:13.803 04:48:43 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:13.803 00:06:13.803 real 0m0.885s 00:06:13.803 user 0m0.328s 00:06:13.803 sys 0m0.597s 00:06:13.803 04:48:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.803 04:48:43 -- common/autotest_common.sh@10 -- # set +x 00:06:13.803 ************************************ 00:06:13.803 END TEST per_node_1G_alloc 00:06:13.803 ************************************ 00:06:13.803 04:48:43 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:06:13.803 04:48:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.803 04:48:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.803 04:48:43 -- common/autotest_common.sh@10 -- # set +x 00:06:13.803 ************************************ 00:06:13.804 START TEST even_2G_alloc 00:06:13.804 ************************************ 00:06:13.804 04:48:43 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:06:13.804 04:48:43 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:06:13.804 04:48:43 -- setup/hugepages.sh@49 -- # local size=2097152 00:06:13.804 04:48:43 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:13.804 04:48:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:13.804 04:48:43 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:13.804 04:48:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:13.804 04:48:43 -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:13.804 04:48:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:13.804 04:48:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:13.804 04:48:43 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:13.804 04:48:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:13.804 04:48:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:13.804 04:48:43 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:13.804 04:48:43 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:13.804 04:48:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:13.804 04:48:43 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:06:13.804 04:48:43 -- setup/hugepages.sh@83 -- # : 0 00:06:13.804 04:48:43 -- setup/hugepages.sh@84 -- # : 0 00:06:13.804 04:48:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:13.804 04:48:43 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:06:13.804 04:48:43 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:06:13.804 04:48:43 -- setup/hugepages.sh@153 -- # setup output 00:06:13.804 04:48:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:13.804 04:48:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:14.063 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:14.063 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:15.000 04:48:44 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:06:15.000 04:48:44 -- setup/hugepages.sh@89 -- # local node 
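Editor's note: the trace above closes the per_node_1G_alloc case and starts even_2G_alloc, which re-runs scripts/setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes so that 1024 two-megabyte hugepages are requested and spread evenly across NUMA nodes. The verification that follows keeps re-reading /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node is given) with the same IFS=': ' read loop seen in the trace. The sketch below shows that parsing idea in isolation; the helper name and the node-prefix handling are illustrative assumptions, not the actual setup/common.sh implementation.

#!/usr/bin/env bash
# Minimal sketch of the meminfo lookup pattern visible in the trace above
# (mapfile/IFS=': ' parsing). Helper name and structure are assumptions,
# not the real setup/common.sh code.
get_meminfo_field() {
    local field=$1 node=${2:-}   # e.g. HugePages_Total, optional NUMA node id
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val rest
    while IFS=': ' read -r var val rest; do
        # Per-node files prefix each line with "Node <N>"; re-split the remainder.
        if [[ $var == Node ]]; then
            IFS=': ' read -r var val rest <<<"$rest"
        fi
        if [[ $var == "$field" ]]; then
            echo "$val"          # numeric value; any trailing "kB" unit stays in $rest
            return 0
        fi
    done <"$mem_f"
    return 1
}

# Example check mirroring the even_2G_alloc expectation of 1024 hugepages:
total=$(get_meminfo_field HugePages_Total)
echo "HugePages_Total=$total (expected 1024 with NRHUGE=1024, HUGE_EVEN_ALLOC=yes)"

Parsing the file line by line in the shell avoids spawning a grep or awk per field, which is presumably why the traced helper also loops in pure bash even though it queries dozens of fields on every pass.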
00:06:15.000 04:48:44 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:15.000 04:48:44 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:15.000 04:48:44 -- setup/hugepages.sh@92 -- # local surp 00:06:15.000 04:48:44 -- setup/hugepages.sh@93 -- # local resv 00:06:15.000 04:48:44 -- setup/hugepages.sh@94 -- # local anon 00:06:15.000 04:48:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:15.000 04:48:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:15.000 04:48:44 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:15.000 04:48:44 -- setup/common.sh@18 -- # local node= 00:06:15.000 04:48:44 -- setup/common.sh@19 -- # local var val 00:06:15.000 04:48:44 -- setup/common.sh@20 -- # local mem_f mem 00:06:15.000 04:48:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:15.000 04:48:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:15.000 04:48:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:15.000 04:48:44 -- setup/common.sh@28 -- # mapfile -t mem 00:06:15.000 04:48:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:15.000 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.000 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.000 04:48:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3562452 kB' 'MemAvailable: 9452328 kB' 'Buffers: 42352 kB' 'Cached: 5929328 kB' 'SwapCached: 0 kB' 'Active: 1649548 kB' 'Inactive: 4457080 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 145528 kB' 'Active(file): 1648468 kB' 'Inactive(file): 4311552 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'AnonPages: 164192 kB' 'Mapped: 68408 kB' 'Shmem: 2596 kB' 'KReclaimable: 251160 kB' 'Slab: 328208 kB' 'SReclaimable: 251160 kB' 'SUnreclaim: 77048 kB' 'KernelStack: 14208 kB' 'PageTables: 3656 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 522092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:15.000 04:48:44 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.000 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.000 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.000 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.000 04:48:44 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.000 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.000 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.000 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.000 04:48:44 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.000 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.000 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.000 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.000 04:48:44 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.000 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.000 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 
04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.001 04:48:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:15.001 04:48:44 -- setup/common.sh@33 -- # echo 0 00:06:15.001 04:48:44 -- setup/common.sh@33 -- # return 0 00:06:15.001 04:48:44 -- setup/hugepages.sh@97 -- # anon=0 00:06:15.001 04:48:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:15.001 04:48:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:15.001 04:48:44 -- setup/common.sh@18 -- # local node= 00:06:15.001 04:48:44 -- setup/common.sh@19 -- # local var val 00:06:15.001 04:48:44 -- setup/common.sh@20 -- # local mem_f mem 00:06:15.001 04:48:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:15.001 04:48:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:15.001 04:48:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:15.001 04:48:44 -- setup/common.sh@28 -- # mapfile -t mem 00:06:15.001 04:48:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.001 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3562452 kB' 'MemAvailable: 9452328 kB' 'Buffers: 42352 kB' 'Cached: 5929328 kB' 'SwapCached: 0 kB' 'Active: 1649548 kB' 'Inactive: 4457016 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 145464 kB' 
'Active(file): 1648468 kB' 'Inactive(file): 4311552 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'AnonPages: 164124 kB' 'Mapped: 68408 kB' 'Shmem: 2596 kB' 'KReclaimable: 251160 kB' 'Slab: 328208 kB' 'SReclaimable: 251160 kB' 'SUnreclaim: 77048 kB' 'KernelStack: 14176 kB' 'PageTables: 3576 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 522092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 
04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- 
# IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.002 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.002 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.003 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.003 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.003 04:48:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.003 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.003 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.003 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.003 04:48:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.003 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.003 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.003 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.003 04:48:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.003 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.003 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.003 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.003 04:48:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.003 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.003 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.003 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.003 04:48:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.003 04:48:44 -- setup/common.sh@33 -- # echo 0 00:06:15.003 04:48:44 -- setup/common.sh@33 -- # return 0 00:06:15.003 04:48:44 -- setup/hugepages.sh@99 -- # surp=0 00:06:15.003 04:48:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:15.003 04:48:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:15.003 04:48:44 -- setup/common.sh@18 -- # local node= 00:06:15.003 04:48:44 -- setup/common.sh@19 -- # local var val 00:06:15.003 04:48:44 -- setup/common.sh@20 -- # local mem_f mem 00:06:15.003 
04:48:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:15.003 04:48:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:15.003 04:48:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:15.003 04:48:44 -- setup/common.sh@28 -- # mapfile -t mem 00:06:15.003 04:48:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.264 04:48:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3562452 kB' 'MemAvailable: 9452328 kB' 'Buffers: 42352 kB' 'Cached: 5929328 kB' 'SwapCached: 0 kB' 'Active: 1649540 kB' 'Inactive: 4456868 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 145316 kB' 'Active(file): 1648468 kB' 'Inactive(file): 4311552 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'AnonPages: 163960 kB' 'Mapped: 68408 kB' 'Shmem: 2596 kB' 'KReclaimable: 251160 kB' 'Slab: 328208 kB' 'SReclaimable: 251160 kB' 'SUnreclaim: 77048 kB' 'KernelStack: 14176 kB' 'PageTables: 3568 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 522092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.264 04:48:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.264 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.264 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:06:15.265 04:48:44 -- setup/common.sh@33 -- # echo 0 00:06:15.265 04:48:44 -- setup/common.sh@33 -- # return 0 00:06:15.265 04:48:44 -- setup/hugepages.sh@100 -- # resv=0 00:06:15.265 nr_hugepages=1024 00:06:15.265 04:48:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:15.265 resv_hugepages=0 00:06:15.265 04:48:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:15.265 surplus_hugepages=0 00:06:15.265 04:48:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:15.265 anon_hugepages=0 00:06:15.265 04:48:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:15.265 04:48:44 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:15.265 04:48:44 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:15.265 04:48:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:15.265 04:48:44 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:15.265 04:48:44 -- setup/common.sh@18 -- # local node= 00:06:15.265 04:48:44 -- setup/common.sh@19 -- # local var val 00:06:15.265 04:48:44 -- setup/common.sh@20 -- # local mem_f mem 00:06:15.265 04:48:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:15.265 04:48:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:15.265 04:48:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:15.265 04:48:44 -- setup/common.sh@28 -- # mapfile -t mem 00:06:15.265 04:48:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3562452 kB' 'MemAvailable: 9452328 kB' 'Buffers: 42352 kB' 'Cached: 5929328 kB' 'SwapCached: 0 kB' 'Active: 1649540 kB' 'Inactive: 4456608 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 145056 kB' 'Active(file): 1648468 kB' 'Inactive(file): 4311552 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'AnonPages: 163700 kB' 'Mapped: 68408 kB' 'Shmem: 2596 kB' 'KReclaimable: 251160 kB' 'Slab: 328208 kB' 'SReclaimable: 251160 kB' 'SUnreclaim: 77048 kB' 'KernelStack: 14244 kB' 'PageTables: 3828 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 522092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.265 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.265 04:48:44 -- setup/common.sh@32 -- # continue 
00:06:15.265 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.266 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.266 04:48:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:15.266 04:48:44 -- setup/common.sh@33 -- # echo 1024 00:06:15.266 04:48:44 -- setup/common.sh@33 -- # return 0 00:06:15.266 04:48:44 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:15.266 04:48:44 -- setup/hugepages.sh@112 -- # get_nodes 00:06:15.266 04:48:44 -- setup/hugepages.sh@27 -- # local node 00:06:15.266 04:48:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:15.267 04:48:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:15.267 04:48:44 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:15.267 04:48:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:15.267 04:48:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:15.267 04:48:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:15.267 04:48:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:15.267 04:48:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:15.267 04:48:44 -- setup/common.sh@18 -- # local node=0 00:06:15.267 04:48:44 -- setup/common.sh@19 -- # local var val 00:06:15.267 04:48:44 -- setup/common.sh@20 -- # local mem_f mem 00:06:15.267 04:48:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:15.267 04:48:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:15.267 04:48:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:15.267 04:48:44 -- setup/common.sh@28 -- # mapfile -t mem 00:06:15.267 04:48:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3562452 kB' 'MemUsed: 8680524 kB' 'SwapCached: 0 kB' 'Active: 1649540 kB' 'Inactive: 4456868 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 145316 kB' 'Active(file): 1648468 kB' 'Inactive(file): 4311552 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'FilePages: 5971680 kB' 'Mapped: 68408 kB' 'AnonPages: 163960 kB' 'Shmem: 2596 kB' 'KernelStack: 14312 kB' 'PageTables: 3828 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 251160 kB' 'Slab: 328208 kB' 'SReclaimable: 251160 kB' 'SUnreclaim: 77048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # 
continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.267 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.267 04:48:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.268 04:48:44 -- setup/common.sh@32 -- # continue 00:06:15.268 04:48:44 -- setup/common.sh@31 -- # IFS=': ' 00:06:15.268 04:48:44 -- setup/common.sh@31 -- # read -r var val _ 00:06:15.268 04:48:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:15.268 04:48:44 -- setup/common.sh@33 -- # echo 0 00:06:15.268 04:48:44 -- setup/common.sh@33 -- # return 0 00:06:15.268 04:48:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:15.268 04:48:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:15.268 04:48:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:15.268 04:48:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:15.268 node0=1024 expecting 1024 00:06:15.268 04:48:44 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:15.268 04:48:44 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:15.268 00:06:15.268 real 0m1.354s 00:06:15.268 user 0m0.307s 00:06:15.268 sys 0m1.087s 00:06:15.268 04:48:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.268 04:48:44 -- common/autotest_common.sh@10 -- # set +x 00:06:15.268 ************************************ 00:06:15.268 END TEST even_2G_alloc 00:06:15.268 ************************************ 00:06:15.268 04:48:45 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:06:15.268 04:48:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.268 04:48:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.268 04:48:45 -- common/autotest_common.sh@10 -- # set +x 
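[Editor's note] The xtrace above records setup/common.sh's get_meminfo walking every field of /proc/meminfo (or of /sys/devices/system/node/nodeN/meminfo when a node id is passed) until it reaches the requested key, echoing that key's value; hugepages.sh then checks that HugePages_Total equals nr_hugepages + surplus + reserved (1024 == 1024 + 0 + 0 for even_2G_alloc, and again per node). A minimal sketch of that parsing pattern, using an assumed helper name rather than the exact script code, looks like this:

    # Hypothetical re-implementation for illustration only; the real get_meminfo in
    # setup/common.sh differs in detail (it uses mapfile plus extglob prefix stripping).
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node statistics come from the node's own meminfo file when a node id is given.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local line var val
        while read -r line; do
            line=${line#"Node $node "}       # per-node files prefix each entry with "Node N"
            var=${line%%:*}                  # field name, e.g. HugePages_Total
            val=${line#*:}
            val=${val//[^0-9]/}              # keep the number, drop whitespace and the kB unit
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        echo 0
    }

    # Assumed usage mirroring the check logged above:
    #   total=$(get_meminfo_sketch HugePages_Total)
    #   (( total == 1024 + 0 + 0 )) || echo "unexpected hugepage count"

The odd_alloc test that starts below runs the same verification after requesting an odd page count: HUGEMEM=2049 gives get_test_nr_hugepages a size of 2098176 kB, which at the 2048 kB default page size is 1024.5 pages, and the log shows the script settling on nr_hugepages=1025.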
00:06:15.268 ************************************ 00:06:15.268 START TEST odd_alloc 00:06:15.268 ************************************ 00:06:15.268 04:48:45 -- common/autotest_common.sh@1104 -- # odd_alloc 00:06:15.268 04:48:45 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:06:15.268 04:48:45 -- setup/hugepages.sh@49 -- # local size=2098176 00:06:15.268 04:48:45 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:15.268 04:48:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:15.268 04:48:45 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:06:15.268 04:48:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:15.268 04:48:45 -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:15.268 04:48:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:15.268 04:48:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:06:15.268 04:48:45 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:15.268 04:48:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:15.268 04:48:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:15.268 04:48:45 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:15.268 04:48:45 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:15.268 04:48:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:15.268 04:48:45 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:06:15.268 04:48:45 -- setup/hugepages.sh@83 -- # : 0 00:06:15.268 04:48:45 -- setup/hugepages.sh@84 -- # : 0 00:06:15.268 04:48:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:15.268 04:48:45 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:06:15.268 04:48:45 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:06:15.268 04:48:45 -- setup/hugepages.sh@160 -- # setup output 00:06:15.268 04:48:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:15.268 04:48:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:15.526 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:15.526 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:16.464 04:48:46 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:06:16.464 04:48:46 -- setup/hugepages.sh@89 -- # local node 00:06:16.464 04:48:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:16.464 04:48:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:16.464 04:48:46 -- setup/hugepages.sh@92 -- # local surp 00:06:16.464 04:48:46 -- setup/hugepages.sh@93 -- # local resv 00:06:16.464 04:48:46 -- setup/hugepages.sh@94 -- # local anon 00:06:16.464 04:48:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:16.464 04:48:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:16.464 04:48:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:16.464 04:48:46 -- setup/common.sh@18 -- # local node= 00:06:16.464 04:48:46 -- setup/common.sh@19 -- # local var val 00:06:16.464 04:48:46 -- setup/common.sh@20 -- # local mem_f mem 00:06:16.465 04:48:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:16.465 04:48:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:16.465 04:48:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:16.465 04:48:46 -- setup/common.sh@28 -- # mapfile -t mem 00:06:16.465 04:48:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@16 -- # 
printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3560340 kB' 'MemAvailable: 9450216 kB' 'Buffers: 42360 kB' 'Cached: 5929320 kB' 'SwapCached: 0 kB' 'Active: 1649552 kB' 'Inactive: 4453044 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 141504 kB' 'Active(file): 1648480 kB' 'Inactive(file): 4311540 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 432 kB' 'Writeback: 0 kB' 'AnonPages: 160148 kB' 'Mapped: 67552 kB' 'Shmem: 2596 kB' 'KReclaimable: 251160 kB' 'Slab: 328560 kB' 'SReclaimable: 251160 kB' 'SUnreclaim: 77400 kB' 'KernelStack: 14096 kB' 'PageTables: 3324 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 511248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29316 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 
04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.465 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.465 04:48:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 
04:48:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:16.466 04:48:46 -- setup/common.sh@33 -- # echo 0 00:06:16.466 04:48:46 -- setup/common.sh@33 -- # return 0 00:06:16.466 04:48:46 -- setup/hugepages.sh@97 -- # anon=0 00:06:16.466 04:48:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:16.466 04:48:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:16.466 04:48:46 -- setup/common.sh@18 -- # local node= 00:06:16.466 04:48:46 -- setup/common.sh@19 -- # local var val 00:06:16.466 04:48:46 -- setup/common.sh@20 -- # local mem_f mem 00:06:16.466 04:48:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:16.466 04:48:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:16.466 04:48:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:16.466 04:48:46 -- setup/common.sh@28 -- # mapfile -t mem 00:06:16.466 04:48:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3560864 kB' 'MemAvailable: 9450748 kB' 'Buffers: 42360 kB' 'Cached: 5929332 kB' 'SwapCached: 0 kB' 'Active: 1649560 kB' 'Inactive: 4453040 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 141496 kB' 'Active(file): 1648488 kB' 'Inactive(file): 4311544 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 20 kB' 'AnonPages: 159852 kB' 'Mapped: 67548 kB' 'Shmem: 2596 kB' 'KReclaimable: 251156 kB' 'Slab: 328556 kB' 'SReclaimable: 251156 kB' 'SUnreclaim: 77400 kB' 'KernelStack: 14096 kB' 'PageTables: 3284 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 511248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29316 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- 
setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 
04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.466 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.466 04:48:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.467 04:48:46 -- setup/common.sh@33 -- # echo 0 00:06:16.467 04:48:46 -- setup/common.sh@33 -- # return 0 00:06:16.467 04:48:46 -- setup/hugepages.sh@99 -- # surp=0 00:06:16.467 04:48:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:16.467 04:48:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:16.467 04:48:46 -- setup/common.sh@18 -- # local node= 00:06:16.467 04:48:46 -- setup/common.sh@19 -- # local var val 00:06:16.467 04:48:46 -- setup/common.sh@20 -- # local mem_f mem 00:06:16.467 04:48:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:16.467 04:48:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:16.467 04:48:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:16.467 04:48:46 -- setup/common.sh@28 -- # mapfile -t mem 00:06:16.467 04:48:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:16.467 04:48:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3561116 kB' 'MemAvailable: 9451000 kB' 'Buffers: 42360 kB' 'Cached: 5929332 kB' 'SwapCached: 0 kB' 'Active: 1649564 kB' 'Inactive: 4453192 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 141648 kB' 'Active(file): 1648488 kB' 'Inactive(file): 4311544 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 20 kB' 'AnonPages: 160004 kB' 'Mapped: 67548 kB' 'Shmem: 2596 kB' 'KReclaimable: 251156 kB' 'Slab: 328492 kB' 'SReclaimable: 251156 kB' 'SUnreclaim: 77336 kB' 'KernelStack: 14048 kB' 'PageTables: 3164 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 511248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29316 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.467 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.467 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r 
var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 
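
The long runs of "continue" above are the get_meminfo helper from setup/common.sh walking every meminfo key until it reaches the one it was asked for (HugePages_Surp, then HugePages_Rsvd). A minimal sketch of that scan pattern follows; the name get_meminfo_sketch and the sed-based "Node N " stripping are illustrative, not the verbatim script:

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups (e.g. "get_meminfo HugePages_Surp 0" later in this log)
    # read the node's own counters, whose lines carry a "Node N " prefix.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # each mismatch is one "continue" line in the trace
        echo "$val"                        # value only, e.g. 0 for HugePages_Surp in this run
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
}

Usage matching this section would be surp=$(get_meminfo_sketch HugePages_Surp) and resv=$(get_meminfo_sketch HugePages_Rsvd), both of which come back 0 here.
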
00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.468 04:48:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:16.468 04:48:46 -- setup/common.sh@33 -- # echo 0 00:06:16.468 04:48:46 -- setup/common.sh@33 -- # return 0 00:06:16.468 04:48:46 -- setup/hugepages.sh@100 -- # resv=0 00:06:16.468 nr_hugepages=1025 00:06:16.468 04:48:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:06:16.468 resv_hugepages=0 00:06:16.468 04:48:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:16.468 surplus_hugepages=0 00:06:16.468 04:48:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:16.468 anon_hugepages=0 00:06:16.468 04:48:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:16.468 04:48:46 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:16.468 04:48:46 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:06:16.468 04:48:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:16.468 04:48:46 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:16.468 04:48:46 -- setup/common.sh@18 -- # local node= 00:06:16.468 04:48:46 -- setup/common.sh@19 -- # local var val 00:06:16.468 04:48:46 -- setup/common.sh@20 -- # local mem_f mem 00:06:16.468 04:48:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:16.468 04:48:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:16.468 04:48:46 -- setup/common.sh@25 -- # 
[[ -n '' ]] 00:06:16.468 04:48:46 -- setup/common.sh@28 -- # mapfile -t mem 00:06:16.468 04:48:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.468 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3561116 kB' 'MemAvailable: 9451000 kB' 'Buffers: 42360 kB' 'Cached: 5929332 kB' 'SwapCached: 0 kB' 'Active: 1649564 kB' 'Inactive: 4452932 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 141388 kB' 'Active(file): 1648488 kB' 'Inactive(file): 4311544 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 20 kB' 'AnonPages: 160004 kB' 'Mapped: 67548 kB' 'Shmem: 2596 kB' 'KReclaimable: 251156 kB' 'Slab: 328492 kB' 'SReclaimable: 251156 kB' 'SUnreclaim: 77336 kB' 'KernelStack: 14116 kB' 'PageTables: 3164 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 511248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29332 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 
-- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.469 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.469 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.470 04:48:46 -- 
setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:16.470 04:48:46 -- setup/common.sh@33 -- # echo 1025 00:06:16.470 04:48:46 -- setup/common.sh@33 -- # return 0 00:06:16.470 04:48:46 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:16.470 04:48:46 -- setup/hugepages.sh@112 -- # get_nodes 00:06:16.470 04:48:46 -- setup/hugepages.sh@27 -- # local node 00:06:16.470 04:48:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:16.470 04:48:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:06:16.470 04:48:46 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:16.470 04:48:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:16.470 04:48:46 -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:06:16.470 04:48:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:16.470 04:48:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:16.470 04:48:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:16.470 04:48:46 -- setup/common.sh@18 -- # local node=0 00:06:16.470 04:48:46 -- setup/common.sh@19 -- # local var val 00:06:16.470 04:48:46 -- setup/common.sh@20 -- # local mem_f mem 00:06:16.470 04:48:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:16.470 04:48:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:16.470 04:48:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:16.470 04:48:46 -- setup/common.sh@28 -- # mapfile -t mem 00:06:16.470 04:48:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3561116 kB' 'MemUsed: 8681860 kB' 'SwapCached: 0 kB' 'Active: 1649564 kB' 'Inactive: 4452888 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 141344 kB' 'Active(file): 1648488 kB' 'Inactive(file): 4311544 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 8 kB' 'Writeback: 20 kB' 'FilePages: 5971692 kB' 'Mapped: 67548 kB' 'AnonPages: 159960 kB' 'Shmem: 2596 kB' 'KernelStack: 14168 kB' 'PageTables: 3384 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 251156 kB' 'Slab: 328492 kB' 'SReclaimable: 251156 kB' 'SUnreclaim: 77336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ Active(anon) 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.470 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.470 04:48:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.471 04:48:46 -- 
setup/common.sh@32 -- # continue 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # continue 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:16.471 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:16.471 04:48:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:16.471 04:48:46 -- setup/common.sh@33 -- # echo 0 00:06:16.471 04:48:46 -- setup/common.sh@33 -- # return 0 00:06:16.471 04:48:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:16.471 04:48:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:16.471 04:48:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:16.471 04:48:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:16.471 node0=1025 expecting 1025 00:06:16.471 04:48:46 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:06:16.471 04:48:46 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:06:16.471 00:06:16.471 real 0m1.185s 00:06:16.471 user 0m0.270s 00:06:16.471 sys 0m0.949s 00:06:16.471 04:48:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.471 04:48:46 -- common/autotest_common.sh@10 -- # set +x 00:06:16.471 ************************************ 00:06:16.471 END TEST odd_alloc 00:06:16.471 ************************************ 00:06:16.471 04:48:46 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:06:16.471 04:48:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:16.471 04:48:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.471 04:48:46 -- common/autotest_common.sh@10 -- # set +x 00:06:16.471 ************************************ 00:06:16.471 START TEST custom_alloc 00:06:16.471 ************************************ 00:06:16.471 04:48:46 -- common/autotest_common.sh@1104 -- # custom_alloc 00:06:16.471 04:48:46 -- setup/hugepages.sh@167 -- # local IFS=, 00:06:16.471 04:48:46 -- setup/hugepages.sh@169 -- # local node 00:06:16.471 04:48:46 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:06:16.471 04:48:46 -- setup/hugepages.sh@170 -- # local nodes_hp 00:06:16.471 04:48:46 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:06:16.471 04:48:46 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:06:16.471 04:48:46 -- setup/hugepages.sh@49 -- # local size=1048576 00:06:16.471 04:48:46 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:16.471 04:48:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:16.471 04:48:46 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:16.471 04:48:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:16.471 04:48:46 -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:16.471 04:48:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:16.471 04:48:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:16.471 04:48:46 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:16.471 04:48:46 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:16.471 04:48:46 -- 
setup/hugepages.sh@67 -- # local -g nodes_test 00:06:16.471 04:48:46 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:16.471 04:48:46 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:16.471 04:48:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:16.471 04:48:46 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:16.471 04:48:46 -- setup/hugepages.sh@83 -- # : 0 00:06:16.471 04:48:46 -- setup/hugepages.sh@84 -- # : 0 00:06:16.471 04:48:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:16.471 04:48:46 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:06:16.471 04:48:46 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:06:16.471 04:48:46 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:16.471 04:48:46 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:16.471 04:48:46 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:16.471 04:48:46 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:06:16.471 04:48:46 -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:16.471 04:48:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:16.471 04:48:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:16.471 04:48:46 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:16.471 04:48:46 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:16.471 04:48:46 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:16.471 04:48:46 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:16.471 04:48:46 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:06:16.471 04:48:46 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:16.471 04:48:46 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:16.471 04:48:46 -- setup/hugepages.sh@78 -- # return 0 00:06:16.471 04:48:46 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:06:16.471 04:48:46 -- setup/hugepages.sh@187 -- # setup output 00:06:16.471 04:48:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:16.471 04:48:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:16.729 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:16.729 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:17.298 04:48:46 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:06:17.298 04:48:46 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:06:17.298 04:48:46 -- setup/hugepages.sh@89 -- # local node 00:06:17.298 04:48:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:17.298 04:48:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:17.298 04:48:46 -- setup/hugepages.sh@92 -- # local surp 00:06:17.298 04:48:46 -- setup/hugepages.sh@93 -- # local resv 00:06:17.298 04:48:46 -- setup/hugepages.sh@94 -- # local anon 00:06:17.298 04:48:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:17.298 04:48:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:17.298 04:48:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:17.298 04:48:46 -- setup/common.sh@18 -- # local node= 00:06:17.298 04:48:46 -- setup/common.sh@19 -- # local var val 00:06:17.298 04:48:46 -- setup/common.sh@20 -- # local mem_f mem 00:06:17.298 04:48:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:17.298 04:48:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:17.298 04:48:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:17.298 04:48:46 -- setup/common.sh@28 -- # mapfile -t mem 00:06:17.298 
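
The custom_alloc test that starts above asks get_test_nr_hugepages for 1048576 kB of hugepages in total; with the 2048 kB Hugepagesize reported in the meminfo dumps, that works out to 512 pages, and with only one memory node present they all land on node 0 (HUGENODE='nodes_hp[0]=512'). The arithmetic, spelled out as a sketch with illustrative variable names:

size_kb=1048576            # requested total, i.e. 1 GiB
hugepage_kb=2048           # Hugepagesize from /proc/meminfo
nr_hugepages=$(( size_kb / hugepage_kb ))     # 512
echo "HUGENODE=nodes_hp[0]=$nr_hugepages"     # matches the trace above
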
04:48:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:17.298 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.298 04:48:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4612596 kB' 'MemAvailable: 10502484 kB' 'Buffers: 42360 kB' 'Cached: 5929336 kB' 'SwapCached: 0 kB' 'Active: 1649576 kB' 'Inactive: 4452944 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 141404 kB' 'Active(file): 1648496 kB' 'Inactive(file): 4311540 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 160316 kB' 'Mapped: 67576 kB' 'Shmem: 2596 kB' 'KReclaimable: 251156 kB' 'Slab: 328284 kB' 'SReclaimable: 251156 kB' 'SUnreclaim: 77128 kB' 'KernelStack: 14112 kB' 'PageTables: 3348 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 511248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29380 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:17.298 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.298 04:48:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.298 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.298 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.298 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.298 04:48:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 
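
Before sampling AnonHugePages, verify_nr_hugepages first checks whether anonymous hugepages can exist at all: the "always [madvise] never != *\[\n\e\v\e\r\]*" test above compares the current transparent-hugepage mode (presumably read from /sys/kernel/mm/transparent_hugepage/enabled) against "[never]". A hedged condensation of that gate, not the verbatim setup/hugepages.sh@96 line:

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" in this run
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # 0 kB here
else
    anon=0
fi
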
00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # 
continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.299 04:48:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.299 04:48:46 -- setup/common.sh@33 -- # echo 0 00:06:17.299 04:48:46 -- setup/common.sh@33 -- # return 0 00:06:17.299 04:48:46 -- setup/hugepages.sh@97 -- # anon=0 00:06:17.299 04:48:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:17.299 04:48:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:17.299 04:48:46 -- setup/common.sh@18 -- # local node= 00:06:17.299 04:48:46 -- setup/common.sh@19 -- # local var val 00:06:17.299 04:48:46 -- setup/common.sh@20 -- # local mem_f mem 00:06:17.299 04:48:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:17.299 04:48:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:17.299 04:48:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:17.299 04:48:46 -- setup/common.sh@28 -- # mapfile -t mem 00:06:17.299 04:48:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.299 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4612596 kB' 'MemAvailable: 10502484 kB' 'Buffers: 42360 kB' 'Cached: 5929336 kB' 'SwapCached: 0 kB' 'Active: 1649576 kB' 'Inactive: 4452944 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 141404 kB' 'Active(file): 1648496 kB' 'Inactive(file): 4311540 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 160056 kB' 'Mapped: 67576 kB' 'Shmem: 2596 kB' 'KReclaimable: 251156 kB' 'Slab: 328284 kB' 'SReclaimable: 251156 kB' 'SUnreclaim: 77128 kB' 'KernelStack: 14112 kB' 'PageTables: 3348 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 511248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read 
-r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.300 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.300 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.301 04:48:46 -- setup/common.sh@32 -- # 
continue 00:06:17.301 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.301 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.301 04:48:46 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.301 04:48:47 -- setup/common.sh@33 -- # echo 0 00:06:17.301 04:48:47 -- setup/common.sh@33 -- # return 0 00:06:17.301 04:48:47 -- setup/hugepages.sh@99 -- # surp=0 00:06:17.301 04:48:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:17.301 04:48:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:17.301 04:48:47 -- setup/common.sh@18 -- # local node= 00:06:17.301 04:48:47 -- setup/common.sh@19 -- # local var val 00:06:17.301 04:48:47 -- setup/common.sh@20 -- # local mem_f mem 00:06:17.301 04:48:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:17.301 04:48:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:17.301 04:48:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:17.301 04:48:47 -- setup/common.sh@28 -- # mapfile -t mem 00:06:17.301 04:48:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4612860 kB' 'MemAvailable: 10502748 kB' 'Buffers: 42360 kB' 'Cached: 5929336 kB' 'SwapCached: 0 kB' 'Active: 1649576 kB' 'Inactive: 4452944 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 141404 kB' 'Active(file): 1648496 kB' 'Inactive(file): 4311540 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 160056 kB' 'Mapped: 67576 kB' 'Shmem: 2596 kB' 'KReclaimable: 251156 kB' 'Slab: 328284 kB' 'SReclaimable: 251156 kB' 
'SUnreclaim: 77128 kB' 'KernelStack: 14112 kB' 'PageTables: 3348 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 511248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- 
setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.301 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.301 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 
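The xtrace entries above come from setup/common.sh's get_meminfo helper: it picks /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node id is given), reads it field by field, skips every key that does not match the requested one, and echoes the matching value. A minimal sketch of that logic, assuming the call signature visible in the trace (get_meminfo <field> [node]); the real helper uses mapfile plus an extglob prefix strip, so the details differ:

get_meminfo() {    # sketch only: get_meminfo <field> [node]
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    # Per-node queries read that node's own meminfo file when it exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    # Node files prefix every line with "Node <id> "; drop it so the field
    # name is always the first token, then scan until the requested field.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"          # value in kB, or a bare page count
        return 0
    done < <(sed "s/^Node $node //" "$mem_f")
    return 1
}

Called as get_meminfo HugePages_Surp 0 it would print 0 on this host. The surrounding hugepages.sh checks then appear to assert that the observed HugePages_Total (512) equals nr_hugepages plus surplus plus reserved (512 + 0 + 0), which is what the later 'node0=512 expecting 512' line records.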
00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.302 04:48:47 -- setup/common.sh@33 -- # echo 0 00:06:17.302 04:48:47 -- setup/common.sh@33 -- # return 0 00:06:17.302 04:48:47 -- setup/hugepages.sh@100 -- # resv=0 00:06:17.302 nr_hugepages=512 00:06:17.302 04:48:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:06:17.302 resv_hugepages=0 00:06:17.302 04:48:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:17.302 surplus_hugepages=0 00:06:17.302 04:48:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:17.302 anon_hugepages=0 00:06:17.302 04:48:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:17.302 04:48:47 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:17.302 04:48:47 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:06:17.302 04:48:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:17.302 04:48:47 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:17.302 04:48:47 -- setup/common.sh@18 -- # local node= 00:06:17.302 04:48:47 -- setup/common.sh@19 -- # local var val 00:06:17.302 04:48:47 -- setup/common.sh@20 -- # 
local mem_f mem 00:06:17.302 04:48:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:17.302 04:48:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:17.302 04:48:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:17.302 04:48:47 -- setup/common.sh@28 -- # mapfile -t mem 00:06:17.302 04:48:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4612860 kB' 'MemAvailable: 10502748 kB' 'Buffers: 42360 kB' 'Cached: 5929336 kB' 'SwapCached: 0 kB' 'Active: 1649576 kB' 'Inactive: 4453204 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 141664 kB' 'Active(file): 1648496 kB' 'Inactive(file): 4311540 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 160576 kB' 'Mapped: 67576 kB' 'Shmem: 2596 kB' 'KReclaimable: 251156 kB' 'Slab: 328284 kB' 'SReclaimable: 251156 kB' 'SUnreclaim: 77128 kB' 'KernelStack: 14180 kB' 'PageTables: 3608 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 511248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # 
continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.302 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.302 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.303 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.303 04:48:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:17.303 04:48:47 -- setup/common.sh@33 -- # echo 512 00:06:17.303 04:48:47 -- setup/common.sh@33 -- # return 0 00:06:17.303 04:48:47 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:17.303 04:48:47 -- setup/hugepages.sh@112 -- # get_nodes 00:06:17.303 04:48:47 -- setup/hugepages.sh@27 -- # local node 00:06:17.303 04:48:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:17.303 04:48:47 -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=512 00:06:17.303 04:48:47 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:17.303 04:48:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:17.304 04:48:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:17.304 04:48:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:17.304 04:48:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:17.304 04:48:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:17.304 04:48:47 -- setup/common.sh@18 -- # local node=0 00:06:17.304 04:48:47 -- setup/common.sh@19 -- # local var val 00:06:17.304 04:48:47 -- setup/common.sh@20 -- # local mem_f mem 00:06:17.304 04:48:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:17.304 04:48:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:17.304 04:48:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:17.304 04:48:47 -- setup/common.sh@28 -- # mapfile -t mem 00:06:17.304 04:48:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4612872 kB' 'MemUsed: 7630104 kB' 'SwapCached: 0 kB' 'Active: 1649568 kB' 'Inactive: 4453072 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 141532 kB' 'Active(file): 1648496 kB' 'Inactive(file): 4311540 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'FilePages: 5971696 kB' 'Mapped: 67564 kB' 'AnonPages: 160428 kB' 'Shmem: 2596 kB' 'KernelStack: 14232 kB' 'PageTables: 3556 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 251156 kB' 'Slab: 328284 kB' 'SReclaimable: 251156 kB' 'SUnreclaim: 77128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 
-- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.304 04:48:47 -- setup/common.sh@32 -- # continue 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # IFS=': ' 00:06:17.304 04:48:47 -- setup/common.sh@31 -- # read -r var val _ 00:06:17.305 04:48:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.305 04:48:47 -- setup/common.sh@33 -- # echo 0 00:06:17.305 04:48:47 -- setup/common.sh@33 -- # return 0 00:06:17.305 04:48:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:17.305 04:48:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:17.305 04:48:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:17.305 04:48:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:17.305 04:48:47 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:17.305 node0=512 expecting 512 00:06:17.305 04:48:47 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:17.305 00:06:17.305 real 0m0.829s 00:06:17.305 user 0m0.284s 00:06:17.305 ************************************ 00:06:17.305 END TEST custom_alloc 00:06:17.305 ************************************ 00:06:17.305 sys 0m0.582s 00:06:17.305 04:48:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.305 04:48:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.305 04:48:47 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:06:17.305 04:48:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.305 04:48:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.305 04:48:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.305 ************************************ 00:06:17.305 START TEST no_shrink_alloc 00:06:17.305 ************************************ 00:06:17.305 04:48:47 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:06:17.305 04:48:47 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:06:17.305 04:48:47 -- setup/hugepages.sh@49 -- # local size=2097152 00:06:17.305 04:48:47 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:17.305 04:48:47 -- setup/hugepages.sh@51 -- # shift 00:06:17.305 04:48:47 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:17.305 04:48:47 -- setup/hugepages.sh@52 -- # local node_ids 00:06:17.305 04:48:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:17.305 04:48:47 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:17.305 04:48:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:17.305 04:48:47 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:17.305 04:48:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:17.305 04:48:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:17.305 04:48:47 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:17.305 04:48:47 -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:06:17.305 04:48:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:17.305 04:48:47 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:17.305 04:48:47 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:17.305 04:48:47 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:17.305 04:48:47 -- setup/hugepages.sh@73 -- # return 0 00:06:17.305 04:48:47 -- setup/hugepages.sh@198 -- # setup output 00:06:17.305 04:48:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:17.305 04:48:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:17.563 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:17.563 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:18.941 04:48:48 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:06:18.941 04:48:48 -- setup/hugepages.sh@89 -- # local node 00:06:18.941 04:48:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:18.941 04:48:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:18.941 04:48:48 -- setup/hugepages.sh@92 -- # local surp 00:06:18.941 04:48:48 -- setup/hugepages.sh@93 -- # local resv 00:06:18.941 04:48:48 -- setup/hugepages.sh@94 -- # local anon 00:06:18.941 04:48:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:18.941 04:48:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:18.941 04:48:48 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:18.941 04:48:48 -- setup/common.sh@18 -- # local node= 00:06:18.941 04:48:48 -- setup/common.sh@19 -- # local var val 00:06:18.941 04:48:48 -- setup/common.sh@20 -- # local mem_f mem 00:06:18.941 04:48:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:18.941 04:48:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:18.941 04:48:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:18.941 04:48:48 -- setup/common.sh@28 -- # mapfile -t mem 00:06:18.941 04:48:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:18.941 04:48:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3565712 kB' 'MemAvailable: 9455600 kB' 'Buffers: 42360 kB' 'Cached: 5929336 kB' 'SwapCached: 0 kB' 'Active: 1649580 kB' 'Inactive: 4453272 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 141736 kB' 'Active(file): 1648500 kB' 'Inactive(file): 4311536 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 160372 kB' 'Mapped: 67592 kB' 'Shmem: 2596 kB' 'KReclaimable: 251156 kB' 'Slab: 328220 kB' 'SReclaimable: 251156 kB' 'SUnreclaim: 77064 kB' 'KernelStack: 14128 kB' 'PageTables: 3388 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 511248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29364 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.941 04:48:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.941 04:48:48 -- 
setup/common.sh@32 -- # continue 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.941 04:48:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.941 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.941 04:48:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.941 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.941 04:48:48 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.941 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.941 04:48:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.941 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.941 04:48:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.941 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.941 04:48:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.941 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.941 04:48:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.941 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.941 04:48:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.941 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.941 04:48:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.941 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.941 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # 
[[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 
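The repeated "IFS=': '", "read -r var val _", and "continue" entries in this stretch are the get_meminfo helper from setup/common.sh scanning the chosen meminfo file one "key: value" pair at a time: every key that is not the one requested is skipped with continue, and the matching value is echoed (0 for AnonHugePages on this VM). The \A\n\o\n\H\u\g\e\P\a\g\e\s form is simply how bash xtrace prints the quoted right-hand side of the == comparison. A minimal sketch of that lookup pattern, assuming a simplified helper name and no per-node handling:

#!/usr/bin/env bash
# Minimal sketch of the key/value scan traced here; names and option handling
# are simplified. The real helper also strips "Node <n> " prefixes and can be
# pointed at /sys/devices/system/node/node<N>/meminfo instead of /proc/meminfo.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # not the requested key, keep scanning
        echo "$val"                        # value only; the trailing "kB" goes into the throwaway field
        return 0
    done < /proc/meminfo
    return 1                               # key not present
}

meminfo_value AnonHugePages                # prints 0 on this test VM, per the trace

Scanning field-by-field in pure bash like this, rather than shelling out to grep or awk for each counter, is presumably why every single loop iteration shows up as its own xtrace entry here.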
00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:18.942 04:48:48 -- setup/common.sh@33 -- # echo 0 00:06:18.942 04:48:48 -- setup/common.sh@33 -- # return 0 00:06:18.942 04:48:48 -- setup/hugepages.sh@97 -- # anon=0 00:06:18.942 04:48:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:18.942 04:48:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:18.942 04:48:48 -- setup/common.sh@18 -- # 
local node= 00:06:18.942 04:48:48 -- setup/common.sh@19 -- # local var val 00:06:18.942 04:48:48 -- setup/common.sh@20 -- # local mem_f mem 00:06:18.942 04:48:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:18.942 04:48:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:18.942 04:48:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:18.942 04:48:48 -- setup/common.sh@28 -- # mapfile -t mem 00:06:18.942 04:48:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3565712 kB' 'MemAvailable: 9455600 kB' 'Buffers: 42360 kB' 'Cached: 5929336 kB' 'SwapCached: 0 kB' 'Active: 1649572 kB' 'Inactive: 4453104 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 141568 kB' 'Active(file): 1648500 kB' 'Inactive(file): 4311536 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 160480 kB' 'Mapped: 67564 kB' 'Shmem: 2596 kB' 'KReclaimable: 251156 kB' 'Slab: 328220 kB' 'SReclaimable: 251156 kB' 'SUnreclaim: 77064 kB' 'KernelStack: 14160 kB' 'PageTables: 3472 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 511380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29364 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.942 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.942 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 
00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- 
setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r 
var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.943 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.943 04:48:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.943 04:48:48 -- setup/common.sh@33 -- # echo 0 00:06:18.943 04:48:48 -- setup/common.sh@33 -- # return 0 00:06:18.943 04:48:48 -- setup/hugepages.sh@99 -- # surp=0 00:06:18.943 04:48:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:18.943 04:48:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:18.943 04:48:48 -- setup/common.sh@18 -- # local node= 00:06:18.943 04:48:48 -- setup/common.sh@19 -- # local var val 00:06:18.943 04:48:48 -- setup/common.sh@20 -- # local mem_f mem 00:06:18.944 04:48:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:18.944 04:48:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:18.944 04:48:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:18.944 04:48:48 -- setup/common.sh@28 -- # mapfile -t mem 00:06:18.944 04:48:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3566216 kB' 'MemAvailable: 9456104 kB' 'Buffers: 42360 kB' 'Cached: 5929336 kB' 'SwapCached: 0 kB' 'Active: 1649580 kB' 'Inactive: 4452960 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 141424 kB' 'Active(file): 1648500 kB' 'Inactive(file): 4311536 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 160332 kB' 'Mapped: 67564 kB' 'Shmem: 2596 kB' 'KReclaimable: 251156 kB' 'Slab: 328220 kB' 'SReclaimable: 251156 kB' 'SUnreclaim: 77064 kB' 'KernelStack: 14128 kB' 'PageTables: 3384 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 511380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29364 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 
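The verify_nr_hugepages pass traced here repeats that same scan once per counter it needs: AnonHugePages (anon=0 above), HugePages_Surp (surp=0), HugePages_Rsvd (this block), and HugePages_Total, then checks 1024 == nr_hugepages + surp + resv (the arithmetic test further down in this trace) before re-running the lookup against /sys/devices/system/node/node0/meminfo for the per-node split. A sketch of that accounting, with illustrative variable names and awk lookups standing in for the script's own helpers:

#!/usr/bin/env bash
# Illustrative re-statement of the accounting visible in this trace
# (anon=0, surp=0, resv=0, HugePages_Total=1024); the awk lookups stand in
# for the script's own get_meminfo calls.
nr_hugepages=1024                                     # requested for the no_shrink_alloc test
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

if (( total == nr_hugepages + surp + resv )); then
    echo "nr_hugepages=$nr_hugepages"                 # matches the summary echoed below
else
    echo "unexpected huge page total: $total" >&2
fi

# Per-node variant: node 0 keeps its own counters in a separate meminfo file.
awk '/HugePages_Surp/ {print $NF}' /sys/devices/system/node/node0/meminfo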
00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 
-- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.944 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.944 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:18.945 04:48:48 -- setup/common.sh@33 -- # echo 0 00:06:18.945 04:48:48 -- setup/common.sh@33 -- # return 0 00:06:18.945 nr_hugepages=1024 00:06:18.945 resv_hugepages=0 00:06:18.945 surplus_hugepages=0 00:06:18.945 anon_hugepages=0 00:06:18.945 04:48:48 -- setup/hugepages.sh@100 -- # resv=0 00:06:18.945 04:48:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:18.945 04:48:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:18.945 04:48:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:18.945 04:48:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:18.945 04:48:48 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:18.945 04:48:48 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:18.945 04:48:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:18.945 04:48:48 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:18.945 04:48:48 -- setup/common.sh@18 -- # local node= 00:06:18.945 04:48:48 -- setup/common.sh@19 -- # local var val 00:06:18.945 04:48:48 -- setup/common.sh@20 -- # local mem_f mem 00:06:18.945 04:48:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:18.945 04:48:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:18.945 04:48:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:18.945 04:48:48 -- setup/common.sh@28 -- # mapfile -t mem 00:06:18.945 04:48:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3566216 kB' 'MemAvailable: 9456104 kB' 'Buffers: 42360 kB' 'Cached: 5929336 kB' 'SwapCached: 0 kB' 'Active: 1649580 kB' 'Inactive: 4452912 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 141376 kB' 'Active(file): 1648500 kB' 'Inactive(file): 4311536 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 160260 kB' 'Mapped: 67564 kB' 'Shmem: 2596 kB' 'KReclaimable: 251156 kB' 'Slab: 328220 kB' 'SReclaimable: 251156 kB' 'SUnreclaim: 77064 kB' 'KernelStack: 14096 kB' 'PageTables: 3304 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 511380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29380 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val 
_ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.945 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.945 04:48:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 
-- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 
-- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:18.946 04:48:48 -- setup/common.sh@33 -- # echo 1024 00:06:18.946 04:48:48 -- setup/common.sh@33 -- # return 0 00:06:18.946 04:48:48 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:18.946 04:48:48 -- setup/hugepages.sh@112 -- # get_nodes 00:06:18.946 04:48:48 -- setup/hugepages.sh@27 -- # local node 00:06:18.946 04:48:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:18.946 04:48:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:18.946 04:48:48 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:18.946 04:48:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:18.946 04:48:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:18.946 04:48:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:18.946 04:48:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:18.946 04:48:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:18.946 04:48:48 -- setup/common.sh@18 -- # local node=0 00:06:18.946 04:48:48 -- setup/common.sh@19 -- # local var val 00:06:18.946 04:48:48 -- setup/common.sh@20 -- # local mem_f mem 00:06:18.946 04:48:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:18.946 04:48:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:18.946 04:48:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:18.946 04:48:48 -- setup/common.sh@28 -- # mapfile -t mem 00:06:18.946 04:48:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3565712 kB' 'MemUsed: 8677264 kB' 'SwapCached: 0 kB' 'Active: 1649580 kB' 'Inactive: 4452912 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 141376 kB' 'Active(file): 1648500 kB' 'Inactive(file): 4311536 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 8 kB' 
'Writeback: 0 kB' 'FilePages: 5971696 kB' 'Mapped: 67564 kB' 'AnonPages: 160000 kB' 'Shmem: 2596 kB' 'KernelStack: 14164 kB' 'PageTables: 3304 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 251156 kB' 'Slab: 328220 kB' 'SReclaimable: 251156 kB' 'SUnreclaim: 77064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.946 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.946 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read 
-r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # continue 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:18.947 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:18.947 04:48:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:18.947 04:48:48 -- setup/common.sh@33 -- # echo 0 00:06:18.947 04:48:48 -- setup/common.sh@33 -- # return 0 00:06:18.947 node0=1024 expecting 1024 00:06:18.947 04:48:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:18.947 04:48:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:18.947 04:48:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:18.947 04:48:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:18.947 04:48:48 -- 
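The scan that just returned is the test's meminfo reader walking /sys/devices/system/node/node0/meminfo key by key until it hits HugePages_Surp. A minimal bash sketch of that idea, for orientation only (get_meminfo_sketch is a made-up name, not the SPDK helper itself):

get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # when a node is given, prefer the per-node meminfo file
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val
    while IFS= read -r line; do
        # per-node rows look like "Node 0 HugePages_Surp:     0"; drop the prefix
        [[ $line =~ ^Node\ [0-9]+\ (.*) ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done <"$mem_f"
    return 1
}

# e.g. get_meminfo_sketch HugePages_Surp 0  -> prints 0 for the node0 state shown above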
setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:18.947 04:48:48 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:18.947 04:48:48 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:06:18.947 04:48:48 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:06:18.947 04:48:48 -- setup/hugepages.sh@202 -- # setup output 00:06:18.947 04:48:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:18.947 04:48:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:19.208 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:19.208 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:19.208 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:06:19.208 04:48:48 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:06:19.208 04:48:48 -- setup/hugepages.sh@89 -- # local node 00:06:19.208 04:48:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:19.208 04:48:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:19.208 04:48:48 -- setup/hugepages.sh@92 -- # local surp 00:06:19.208 04:48:48 -- setup/hugepages.sh@93 -- # local resv 00:06:19.208 04:48:48 -- setup/hugepages.sh@94 -- # local anon 00:06:19.208 04:48:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:19.208 04:48:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:19.208 04:48:48 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:19.208 04:48:48 -- setup/common.sh@18 -- # local node= 00:06:19.208 04:48:48 -- setup/common.sh@19 -- # local var val 00:06:19.208 04:48:48 -- setup/common.sh@20 -- # local mem_f mem 00:06:19.208 04:48:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:19.208 04:48:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:19.208 04:48:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:19.208 04:48:48 -- setup/common.sh@28 -- # mapfile -t mem 00:06:19.208 04:48:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3564012 kB' 'MemAvailable: 9453900 kB' 'Buffers: 42360 kB' 'Cached: 5929336 kB' 'SwapCached: 0 kB' 'Active: 1649668 kB' 'Inactive: 4454024 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 142568 kB' 'Active(file): 1648580 kB' 'Inactive(file): 4311456 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 161248 kB' 'Mapped: 67812 kB' 'Shmem: 2596 kB' 'KReclaimable: 251156 kB' 'Slab: 328172 kB' 'SReclaimable: 251156 kB' 'SUnreclaim: 77016 kB' 'KernelStack: 14244 kB' 'PageTables: 3740 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 511380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- 
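The INFO line above shows scripts/setup.sh leaving the reservation alone: 512 pages were requested with CLEAR_HUGE=no while 1024 were already allocated on node0. A hedged sketch of that policy, assuming 2048 kB pages and the standard per-node sysfs knob (request_hugepages_sketch is a made-up name; this is not the setup.sh source):

request_hugepages_sketch() {
    local want=$1 node=${2:-0}
    local knob=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    local have
    have=$(<"$knob")
    # with CLEAR_HUGE=no an existing reservation is only grown, never shrunk
    if (( have >= want )); then
        echo "INFO: Requested $want hugepages but $have already allocated on node$node"
        return 0
    fi
    echo "$want" >"$knob"
}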
setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 
04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.208 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.208 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ 
PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:19.209 04:48:48 -- setup/common.sh@33 -- # echo 0 00:06:19.209 04:48:48 -- setup/common.sh@33 -- # return 0 00:06:19.209 04:48:48 -- setup/hugepages.sh@97 -- # anon=0 00:06:19.209 04:48:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:19.209 04:48:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:19.209 04:48:48 -- setup/common.sh@18 -- # local node= 00:06:19.209 04:48:48 -- 
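The anon-hugepages pass that just finished (anon=0) only runs because transparent hugepages are not disabled; the earlier test against *\[\n\e\v\e\r\]* is that gate. A hedged sketch of the same check (thp_enabled_sketch is a made-up name):

thp_enabled_sketch() {
    local mode
    # typical contents: "always [madvise] never" - the bracketed word is the active mode
    mode=$(</sys/kernel/mm/transparent_hugepage/enabled)
    [[ $mode != *"[never]"* ]]
}

# e.g. thp_enabled_sketch && echo "count AnonHugePages" || echo "skip anon accounting"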
setup/common.sh@19 -- # local var val 00:06:19.209 04:48:48 -- setup/common.sh@20 -- # local mem_f mem 00:06:19.209 04:48:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:19.209 04:48:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:19.209 04:48:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:19.209 04:48:48 -- setup/common.sh@28 -- # mapfile -t mem 00:06:19.209 04:48:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3563988 kB' 'MemAvailable: 9453876 kB' 'Buffers: 42360 kB' 'Cached: 5929336 kB' 'SwapCached: 0 kB' 'Active: 1649652 kB' 'Inactive: 4453720 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 142264 kB' 'Active(file): 1648580 kB' 'Inactive(file): 4311456 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 160892 kB' 'Mapped: 67840 kB' 'Shmem: 2596 kB' 'KReclaimable: 251156 kB' 'Slab: 328060 kB' 'SReclaimable: 251156 kB' 'SUnreclaim: 76904 kB' 'KernelStack: 14204 kB' 'PageTables: 3656 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 511380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.209 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.209 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:48 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:48 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:48 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # 
continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.210 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.210 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 
04:48:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.211 04:48:49 -- setup/common.sh@33 -- # echo 0 00:06:19.211 04:48:49 -- setup/common.sh@33 -- # return 0 00:06:19.211 04:48:49 -- setup/hugepages.sh@99 -- # surp=0 00:06:19.211 04:48:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:19.211 04:48:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:19.211 04:48:49 -- setup/common.sh@18 -- # local node= 00:06:19.211 04:48:49 -- setup/common.sh@19 -- # local var val 00:06:19.211 04:48:49 -- setup/common.sh@20 -- # local mem_f mem 00:06:19.211 04:48:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:19.211 04:48:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:19.211 04:48:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:19.211 04:48:49 -- setup/common.sh@28 -- # mapfile -t mem 00:06:19.211 04:48:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3564472 kB' 'MemAvailable: 9454360 kB' 'Buffers: 42360 kB' 'Cached: 5929336 kB' 'SwapCached: 0 kB' 'Active: 1649652 kB' 'Inactive: 4452952 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 141496 kB' 'Active(file): 1648580 kB' 'Inactive(file): 4311456 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 160356 kB' 'Mapped: 67644 kB' 'Shmem: 2596 kB' 'KReclaimable: 251156 kB' 'Slab: 328088 kB' 'SReclaimable: 251156 kB' 'SUnreclaim: 76932 kB' 'KernelStack: 14076 kB' 'PageTables: 3384 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 511380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.211 04:48:49 
-- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.211 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.211 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.211 04:48:49 -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 
04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.212 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.212 04:48:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:19.212 04:48:49 -- setup/common.sh@33 -- # echo 0 00:06:19.212 04:48:49 -- setup/common.sh@33 -- # return 0 00:06:19.212 04:48:49 -- setup/hugepages.sh@100 -- # resv=0 00:06:19.212 04:48:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:19.212 nr_hugepages=1024 00:06:19.212 04:48:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:19.212 resv_hugepages=0 00:06:19.212 04:48:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:19.212 surplus_hugepages=0 00:06:19.212 anon_hugepages=0 00:06:19.212 04:48:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:19.212 04:48:49 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:19.213 04:48:49 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:19.213 04:48:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:19.213 04:48:49 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:19.213 04:48:49 -- setup/common.sh@18 -- # local node= 00:06:19.213 04:48:49 -- setup/common.sh@19 -- # local var val 00:06:19.213 04:48:49 -- setup/common.sh@20 -- # local mem_f mem 00:06:19.213 04:48:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:19.213 04:48:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:19.213 04:48:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:19.213 04:48:49 -- setup/common.sh@28 -- # mapfile -t mem 00:06:19.213 04:48:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:19.213 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.473 04:48:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3564724 kB' 'MemAvailable: 9454612 kB' 'Buffers: 42360 kB' 'Cached: 5929336 kB' 'SwapCached: 0 kB' 'Active: 1649652 kB' 'Inactive: 4452836 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 141380 kB' 'Active(file): 1648580 kB' 'Inactive(file): 4311456 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 160292 kB' 'Mapped: 67564 kB' 'Shmem: 2596 kB' 'KReclaimable: 251156 kB' 'Slab: 328088 kB' 'SReclaimable: 251156 kB' 'SUnreclaim: 76932 kB' 'KernelStack: 14128 kB' 'PageTables: 3388 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 511380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 29412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.473 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.473 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 
04:48:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- 
# [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:19.474 04:48:49 -- setup/common.sh@33 -- # echo 1024 00:06:19.474 04:48:49 -- setup/common.sh@33 -- # return 0 00:06:19.474 04:48:49 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:19.474 04:48:49 -- setup/hugepages.sh@112 -- # get_nodes 00:06:19.474 04:48:49 -- setup/hugepages.sh@27 -- # local node 00:06:19.474 04:48:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:19.474 04:48:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:19.474 04:48:49 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:19.474 04:48:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:19.474 04:48:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:19.474 04:48:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:19.474 04:48:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:19.474 04:48:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:19.474 04:48:49 -- setup/common.sh@18 -- # local node=0 00:06:19.474 04:48:49 -- setup/common.sh@19 -- # local var val 00:06:19.474 04:48:49 -- setup/common.sh@20 -- # local mem_f mem 00:06:19.474 04:48:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:19.474 04:48:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:19.474 04:48:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:19.474 04:48:49 -- setup/common.sh@28 -- # mapfile -t mem 00:06:19.474 04:48:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 3564724 kB' 'MemUsed: 8678252 kB' 'SwapCached: 0 kB' 'Active: 1649652 kB' 'Inactive: 4453044 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 141588 kB' 'Active(file): 1648580 kB' 'Inactive(file): 4311456 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 8 kB' 
'Writeback: 0 kB' 'FilePages: 5971696 kB' 'Mapped: 67564 kB' 'AnonPages: 159952 kB' 'Shmem: 2596 kB' 'KernelStack: 14112 kB' 'PageTables: 3344 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 251156 kB' 'Slab: 328088 kB' 'SReclaimable: 251156 kB' 'SUnreclaim: 76932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.474 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.474 04:48:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read 
-r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # continue 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # IFS=': ' 00:06:19.475 04:48:49 -- setup/common.sh@31 -- # read -r var val _ 00:06:19.475 04:48:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:19.475 04:48:49 -- setup/common.sh@33 -- # echo 0 00:06:19.475 04:48:49 -- setup/common.sh@33 -- # return 0 00:06:19.475 04:48:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:19.475 04:48:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:19.475 04:48:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:19.475 04:48:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:19.475 04:48:49 -- setup/hugepages.sh@128 -- # echo 'node0=1024 
expecting 1024' 00:06:19.475 node0=1024 expecting 1024 00:06:19.475 04:48:49 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:19.475 00:06:19.475 real 0m2.043s 00:06:19.475 user 0m0.619s 00:06:19.475 sys 0m1.360s 00:06:19.475 04:48:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.475 04:48:49 -- common/autotest_common.sh@10 -- # set +x 00:06:19.475 ************************************ 00:06:19.475 END TEST no_shrink_alloc 00:06:19.475 ************************************ 00:06:19.475 04:48:49 -- setup/hugepages.sh@217 -- # clear_hp 00:06:19.475 04:48:49 -- setup/hugepages.sh@37 -- # local node hp 00:06:19.475 04:48:49 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:19.475 04:48:49 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:19.475 04:48:49 -- setup/hugepages.sh@41 -- # echo 0 00:06:19.475 04:48:49 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:19.475 04:48:49 -- setup/hugepages.sh@41 -- # echo 0 00:06:19.475 04:48:49 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:19.475 04:48:49 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:19.475 ************************************ 00:06:19.475 END TEST hugepages 00:06:19.475 ************************************ 00:06:19.475 00:06:19.475 real 0m8.170s 00:06:19.475 user 0m2.424s 00:06:19.475 sys 0m5.837s 00:06:19.475 04:48:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.475 04:48:49 -- common/autotest_common.sh@10 -- # set +x 00:06:19.475 04:48:49 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:19.476 04:48:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:19.476 04:48:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.476 04:48:49 -- common/autotest_common.sh@10 -- # set +x 00:06:19.476 ************************************ 00:06:19.476 START TEST driver 00:06:19.476 ************************************ 00:06:19.476 04:48:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:19.476 * Looking for test storage... 
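For reference, the long get_meminfo traces above reduce to a small lookup over /proc/meminfo (or the per-node copy under /sys/devices/system/node/node<N>/meminfo): read the file into an array, strip any "Node <n>" prefix, then scan for the requested field with an IFS=': ' read. A minimal bash sketch of that pattern follows; the function name get_meminfo_field is illustrative rather than the actual setup/common.sh helper, though the mapfile/IFS mechanics mirror the trace.

    shopt -s extglob                       # needed for the +([0-9]) prefix strip below
    get_meminfo_field() {                  # e.g. get_meminfo_field HugePages_Rsvd 0
        local want=$1 node=${2:-} line var val _
        local -a mem
        local mem_f=/proc/meminfo
        # per-node statistics live in sysfs when a node index is given
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # node files prefix every line with "Node <n>"
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$want" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

Called as get_meminfo_field HugePages_Rsvd 0 it prints 0, which is the "echo 0 / return 0" pair the trace keeps hitting; the hugepages test then compares such values against the expected nr_hugepages=1024.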
00:06:19.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:19.733 04:48:49 -- setup/driver.sh@68 -- # setup reset 00:06:19.733 04:48:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:19.733 04:48:49 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:19.991 04:48:49 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:19.992 04:48:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:19.992 04:48:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.992 04:48:49 -- common/autotest_common.sh@10 -- # set +x 00:06:19.992 ************************************ 00:06:19.992 START TEST guess_driver 00:06:19.992 ************************************ 00:06:19.992 04:48:49 -- common/autotest_common.sh@1104 -- # guess_driver 00:06:19.992 04:48:49 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:06:19.992 04:48:49 -- setup/driver.sh@47 -- # local fail=0 00:06:19.992 04:48:49 -- setup/driver.sh@49 -- # pick_driver 00:06:19.992 04:48:49 -- setup/driver.sh@36 -- # vfio 00:06:19.992 04:48:49 -- setup/driver.sh@21 -- # local iommu_grups 00:06:19.992 04:48:49 -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:19.992 04:48:49 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:19.992 04:48:49 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:06:19.992 04:48:49 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:06:19.992 04:48:49 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:06:19.992 04:48:49 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:06:19.992 04:48:49 -- setup/driver.sh@32 -- # return 1 00:06:19.992 04:48:49 -- setup/driver.sh@38 -- # uio 00:06:19.992 04:48:49 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:06:19.992 04:48:49 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:06:19.992 04:48:49 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:06:19.992 04:48:49 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:06:19.992 04:48:49 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:06:19.992 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:06:19.992 04:48:49 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:06:19.992 Looking for driver=uio_pci_generic 00:06:19.992 04:48:49 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:06:19.992 04:48:49 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:19.992 04:48:49 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:06:19.992 04:48:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:19.992 04:48:49 -- setup/driver.sh@45 -- # setup output config 00:06:19.992 04:48:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:19.992 04:48:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:20.559 04:48:50 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:06:20.559 04:48:50 -- setup/driver.sh@58 -- # continue 00:06:20.559 04:48:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:20.559 04:48:50 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:20.559 04:48:50 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:20.559 04:48:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:22.460 04:48:52 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:22.460 04:48:52 -- setup/driver.sh@65 -- # setup reset 
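Stripped of the xtrace noise, the driver decision that guess_driver just made is: take vfio when IOMMU groups are present (or unsafe no-IOMMU mode is switched on), otherwise settle for uio_pci_generic if modprobe can resolve the module, and fail with "No valid driver found" otherwise. A condensed sketch of that logic; the single pick_pci_driver function here is an illustration, whereas setup/driver.sh splits the same checks across its vfio and uio helpers.

    pick_pci_driver() {
        # vfio-pci needs a working IOMMU, or unsafe no-IOMMU mode explicitly enabled
        local unsafe=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        shopt -s nullglob
        local -a groups=(/sys/kernel/iommu_groups/*)
        shopt -u nullglob
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci
            return 0
        fi
        # fall back to the generic UIO driver if its module (plus deps) resolves
        if modprobe --show-depends uio_pci_generic &> /dev/null; then
            echo uio_pci_generic
            return 0
        fi
        echo 'No valid driver found'
        return 1
    }

On this VM the IOMMU group glob is empty and unsafe mode is N, so the fallback path wins and the test ends up looking for driver=uio_pci_generic, exactly as the log shows.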
00:06:22.460 04:48:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:22.460 04:48:52 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:22.718 00:06:22.718 real 0m2.717s 00:06:22.718 user 0m0.449s 00:06:22.718 sys 0m2.292s 00:06:22.718 04:48:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.718 ************************************ 00:06:22.718 END TEST guess_driver 00:06:22.718 ************************************ 00:06:22.718 04:48:52 -- common/autotest_common.sh@10 -- # set +x 00:06:22.718 00:06:22.718 real 0m3.314s 00:06:22.718 user 0m0.713s 00:06:22.718 sys 0m2.630s 00:06:22.718 04:48:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.718 ************************************ 00:06:22.718 04:48:52 -- common/autotest_common.sh@10 -- # set +x 00:06:22.718 END TEST driver 00:06:22.718 ************************************ 00:06:22.977 04:48:52 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:22.977 04:48:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:22.977 04:48:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:22.977 04:48:52 -- common/autotest_common.sh@10 -- # set +x 00:06:22.977 ************************************ 00:06:22.977 START TEST devices 00:06:22.977 ************************************ 00:06:22.977 04:48:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:22.977 * Looking for test storage... 00:06:22.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:22.977 04:48:52 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:22.977 04:48:52 -- setup/devices.sh@192 -- # setup reset 00:06:22.977 04:48:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:22.977 04:48:52 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:23.544 04:48:53 -- setup/devices.sh@194 -- # get_zoned_devs 00:06:23.544 04:48:53 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:06:23.544 04:48:53 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:06:23.544 04:48:53 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:06:23.544 04:48:53 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:06:23.544 04:48:53 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:06:23.544 04:48:53 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:06:23.544 04:48:53 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:23.544 04:48:53 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:06:23.544 04:48:53 -- setup/devices.sh@196 -- # blocks=() 00:06:23.544 04:48:53 -- setup/devices.sh@196 -- # declare -a blocks 00:06:23.544 04:48:53 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:23.544 04:48:53 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:23.544 04:48:53 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:23.544 04:48:53 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:23.544 04:48:53 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:23.544 04:48:53 -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:23.544 04:48:53 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:06:23.544 04:48:53 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:06:23.544 04:48:53 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:23.544 04:48:53 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:06:23.544 04:48:53 -- 
scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:06:23.544 No valid GPT data, bailing 00:06:23.544 04:48:53 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:23.544 04:48:53 -- scripts/common.sh@393 -- # pt= 00:06:23.544 04:48:53 -- scripts/common.sh@394 -- # return 1 00:06:23.544 04:48:53 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:23.544 04:48:53 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:23.544 04:48:53 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:23.544 04:48:53 -- setup/common.sh@80 -- # echo 5368709120 00:06:23.544 04:48:53 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:06:23.544 04:48:53 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:23.544 04:48:53 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:06:23.544 04:48:53 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:06:23.544 04:48:53 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:23.544 04:48:53 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:23.544 04:48:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:23.544 04:48:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.544 04:48:53 -- common/autotest_common.sh@10 -- # set +x 00:06:23.544 ************************************ 00:06:23.544 START TEST nvme_mount 00:06:23.544 ************************************ 00:06:23.544 04:48:53 -- common/autotest_common.sh@1104 -- # nvme_mount 00:06:23.544 04:48:53 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:23.544 04:48:53 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:23.544 04:48:53 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:23.544 04:48:53 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:23.544 04:48:53 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:23.544 04:48:53 -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:23.544 04:48:53 -- setup/common.sh@40 -- # local part_no=1 00:06:23.544 04:48:53 -- setup/common.sh@41 -- # local size=1073741824 00:06:23.544 04:48:53 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:23.544 04:48:53 -- setup/common.sh@44 -- # parts=() 00:06:23.544 04:48:53 -- setup/common.sh@44 -- # local parts 00:06:23.544 04:48:53 -- setup/common.sh@46 -- # (( part = 1 )) 00:06:23.544 04:48:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:23.544 04:48:53 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:23.544 04:48:53 -- setup/common.sh@46 -- # (( part++ )) 00:06:23.544 04:48:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:23.544 04:48:53 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:23.544 04:48:53 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:23.544 04:48:53 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:24.481 Creating new GPT entries in memory. 00:06:24.481 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:24.481 other utilities. 00:06:24.481 04:48:54 -- setup/common.sh@57 -- # (( part = 1 )) 00:06:24.481 04:48:54 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:24.481 04:48:54 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:06:24.481 04:48:54 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:24.481 04:48:54 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:25.417 Creating new GPT entries in memory. 00:06:25.417 The operation has completed successfully. 00:06:25.417 04:48:55 -- setup/common.sh@57 -- # (( part++ )) 00:06:25.417 04:48:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:25.417 04:48:55 -- setup/common.sh@62 -- # wait 110215 00:06:25.677 04:48:55 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:25.677 04:48:55 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:06:25.677 04:48:55 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:25.677 04:48:55 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:25.677 04:48:55 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:25.677 04:48:55 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:25.677 04:48:55 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:25.677 04:48:55 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:06:25.677 04:48:55 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:25.677 04:48:55 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:25.677 04:48:55 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:25.677 04:48:55 -- setup/devices.sh@53 -- # local found=0 00:06:25.677 04:48:55 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:25.677 04:48:55 -- setup/devices.sh@56 -- # : 00:06:25.677 04:48:55 -- setup/devices.sh@59 -- # local pci status 00:06:25.677 04:48:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.677 04:48:55 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:06:25.677 04:48:55 -- setup/devices.sh@47 -- # setup output config 00:06:25.677 04:48:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:25.677 04:48:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:25.677 04:48:55 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:25.677 04:48:55 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:25.677 04:48:55 -- setup/devices.sh@63 -- # found=1 00:06:25.677 04:48:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.677 04:48:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:25.677 04:48:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.937 04:48:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:25.937 04:48:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:27.839 04:48:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:27.839 04:48:57 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:27.839 04:48:57 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:27.839 04:48:57 -- setup/devices.sh@73 -- # 
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:27.839 04:48:57 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:27.839 04:48:57 -- setup/devices.sh@110 -- # cleanup_nvme 00:06:27.839 04:48:57 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:27.839 04:48:57 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:27.839 04:48:57 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:27.839 04:48:57 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:27.839 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:27.839 04:48:57 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:27.839 04:48:57 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:27.839 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:27.839 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:27.839 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:27.839 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:27.839 04:48:57 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:06:27.839 04:48:57 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:06:27.839 04:48:57 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:27.839 04:48:57 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:27.839 04:48:57 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:27.839 04:48:57 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:27.839 04:48:57 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:27.839 04:48:57 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:06:27.839 04:48:57 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:27.839 04:48:57 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:27.839 04:48:57 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:27.839 04:48:57 -- setup/devices.sh@53 -- # local found=0 00:06:27.839 04:48:57 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:27.839 04:48:57 -- setup/devices.sh@56 -- # : 00:06:27.839 04:48:57 -- setup/devices.sh@59 -- # local pci status 00:06:27.839 04:48:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:27.839 04:48:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:06:27.839 04:48:57 -- setup/devices.sh@47 -- # setup output config 00:06:27.839 04:48:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:27.839 04:48:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:27.839 04:48:57 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:27.839 04:48:57 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:27.839 04:48:57 -- setup/devices.sh@63 -- # found=1 00:06:27.839 04:48:57 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:06:27.839 04:48:57 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:27.839 04:48:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.098 04:48:57 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:28.098 04:48:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.003 04:48:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:30.003 04:48:59 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:30.003 04:48:59 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:30.003 04:48:59 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:30.003 04:48:59 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:30.003 04:48:59 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:30.003 04:48:59 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:06:30.003 04:48:59 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:06:30.003 04:48:59 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:30.003 04:48:59 -- setup/devices.sh@50 -- # local mount_point= 00:06:30.003 04:48:59 -- setup/devices.sh@51 -- # local test_file= 00:06:30.003 04:48:59 -- setup/devices.sh@53 -- # local found=0 00:06:30.003 04:48:59 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:30.003 04:48:59 -- setup/devices.sh@59 -- # local pci status 00:06:30.003 04:48:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.003 04:48:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:06:30.003 04:48:59 -- setup/devices.sh@47 -- # setup output config 00:06:30.003 04:48:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:30.003 04:48:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:30.003 04:48:59 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:30.003 04:48:59 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:30.003 04:48:59 -- setup/devices.sh@63 -- # found=1 00:06:30.003 04:48:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.262 04:48:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:30.262 04:48:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.262 04:49:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:30.262 04:49:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:32.168 04:49:01 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:32.168 04:49:01 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:32.168 04:49:01 -- setup/devices.sh@68 -- # return 0 00:06:32.168 04:49:01 -- setup/devices.sh@128 -- # cleanup_nvme 00:06:32.168 04:49:01 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:32.168 04:49:01 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:32.168 04:49:01 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:32.168 04:49:01 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:32.168 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:32.168 00:06:32.168 real 0m8.635s 00:06:32.168 user 0m0.750s 00:06:32.168 sys 0m5.900s 00:06:32.168 04:49:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.168 04:49:01 -- 
common/autotest_common.sh@10 -- # set +x 00:06:32.168 ************************************ 00:06:32.168 END TEST nvme_mount 00:06:32.168 ************************************ 00:06:32.168 04:49:01 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:32.168 04:49:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:32.168 04:49:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.168 04:49:01 -- common/autotest_common.sh@10 -- # set +x 00:06:32.168 ************************************ 00:06:32.168 START TEST dm_mount 00:06:32.168 ************************************ 00:06:32.168 04:49:01 -- common/autotest_common.sh@1104 -- # dm_mount 00:06:32.168 04:49:01 -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:32.168 04:49:01 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:32.168 04:49:01 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:32.168 04:49:01 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:32.168 04:49:01 -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:32.168 04:49:01 -- setup/common.sh@40 -- # local part_no=2 00:06:32.168 04:49:01 -- setup/common.sh@41 -- # local size=1073741824 00:06:32.168 04:49:01 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:32.168 04:49:01 -- setup/common.sh@44 -- # parts=() 00:06:32.168 04:49:01 -- setup/common.sh@44 -- # local parts 00:06:32.168 04:49:01 -- setup/common.sh@46 -- # (( part = 1 )) 00:06:32.168 04:49:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:32.168 04:49:01 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:32.168 04:49:01 -- setup/common.sh@46 -- # (( part++ )) 00:06:32.168 04:49:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:32.168 04:49:01 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:32.168 04:49:01 -- setup/common.sh@46 -- # (( part++ )) 00:06:32.168 04:49:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:32.168 04:49:01 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:32.168 04:49:01 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:32.168 04:49:01 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:33.103 Creating new GPT entries in memory. 00:06:33.103 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:33.103 other utilities. 00:06:33.103 04:49:02 -- setup/common.sh@57 -- # (( part = 1 )) 00:06:33.103 04:49:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:33.103 04:49:02 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:33.103 04:49:02 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:33.103 04:49:02 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:34.479 Creating new GPT entries in memory. 00:06:34.479 The operation has completed successfully. 00:06:34.479 04:49:03 -- setup/common.sh@57 -- # (( part++ )) 00:06:34.479 04:49:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:34.479 04:49:03 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:34.479 04:49:03 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:34.479 04:49:03 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:35.413 The operation has completed successfully. 
00:06:35.413 04:49:05 -- setup/common.sh@57 -- # (( part++ )) 00:06:35.413 04:49:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:35.413 04:49:05 -- setup/common.sh@62 -- # wait 110726 00:06:35.413 04:49:05 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:35.413 04:49:05 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:35.413 04:49:05 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:35.413 04:49:05 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:35.413 04:49:05 -- setup/devices.sh@160 -- # for t in {1..5} 00:06:35.413 04:49:05 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:35.413 04:49:05 -- setup/devices.sh@161 -- # break 00:06:35.413 04:49:05 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:35.413 04:49:05 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:35.413 04:49:05 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:35.413 04:49:05 -- setup/devices.sh@166 -- # dm=dm-0 00:06:35.413 04:49:05 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:35.413 04:49:05 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:35.413 04:49:05 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:35.413 04:49:05 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:35.413 04:49:05 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:35.413 04:49:05 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:35.413 04:49:05 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:35.413 04:49:05 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:35.413 04:49:05 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:35.413 04:49:05 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:06:35.413 04:49:05 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:35.413 04:49:05 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:35.413 04:49:05 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:35.413 04:49:05 -- setup/devices.sh@53 -- # local found=0 00:06:35.413 04:49:05 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:35.413 04:49:05 -- setup/devices.sh@56 -- # : 00:06:35.413 04:49:05 -- setup/devices.sh@59 -- # local pci status 00:06:35.413 04:49:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:35.413 04:49:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:06:35.413 04:49:05 -- setup/devices.sh@47 -- # setup output config 00:06:35.413 04:49:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:35.413 04:49:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:35.672 04:49:05 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:35.672 04:49:05 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:35.672 04:49:05 -- setup/devices.sh@63 -- # found=1 00:06:35.672 04:49:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:35.672 04:49:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:35.672 04:49:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:35.672 04:49:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:35.672 04:49:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.576 04:49:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:37.576 04:49:07 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:37.576 04:49:07 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:37.576 04:49:07 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:37.576 04:49:07 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:37.576 04:49:07 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:37.576 04:49:07 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:37.577 04:49:07 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:06:37.577 04:49:07 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:37.577 04:49:07 -- setup/devices.sh@50 -- # local mount_point= 00:06:37.577 04:49:07 -- setup/devices.sh@51 -- # local test_file= 00:06:37.577 04:49:07 -- setup/devices.sh@53 -- # local found=0 00:06:37.577 04:49:07 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:37.577 04:49:07 -- setup/devices.sh@59 -- # local pci status 00:06:37.577 04:49:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.577 04:49:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:06:37.577 04:49:07 -- setup/devices.sh@47 -- # setup output config 00:06:37.577 04:49:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:37.577 04:49:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:37.836 04:49:07 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:37.836 04:49:07 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:37.836 04:49:07 -- setup/devices.sh@63 -- # found=1 00:06:37.836 04:49:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.836 04:49:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:37.836 04:49:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.836 04:49:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:37.836 04:49:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.740 04:49:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:39.740 04:49:09 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:39.740 04:49:09 -- setup/devices.sh@68 -- # return 0 00:06:39.740 04:49:09 -- setup/devices.sh@187 -- # cleanup_dm 00:06:39.740 04:49:09 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:39.740 04:49:09 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:39.740 04:49:09 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:39.740 04:49:09 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:39.740 04:49:09 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:39.740 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:39.740 04:49:09 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:39.740 04:49:09 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:39.740 00:06:39.740 real 0m7.458s 00:06:39.740 user 0m0.514s 00:06:39.740 sys 0m3.786s 00:06:39.740 04:49:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.740 04:49:09 -- common/autotest_common.sh@10 -- # set +x 00:06:39.740 ************************************ 00:06:39.740 END TEST dm_mount 00:06:39.740 ************************************ 00:06:39.740 04:49:09 -- setup/devices.sh@1 -- # cleanup 00:06:39.740 04:49:09 -- setup/devices.sh@11 -- # cleanup_nvme 00:06:39.740 04:49:09 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:39.740 04:49:09 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:39.740 04:49:09 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:39.740 04:49:09 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:39.740 04:49:09 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:39.740 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:39.740 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:39.740 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:39.740 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:39.740 04:49:09 -- setup/devices.sh@12 -- # cleanup_dm 00:06:39.740 04:49:09 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:39.740 04:49:09 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:39.740 04:49:09 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:39.740 04:49:09 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:39.740 04:49:09 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:39.740 04:49:09 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:39.740 00:06:39.740 real 0m16.873s 00:06:39.740 user 0m1.688s 00:06:39.740 sys 0m10.023s 00:06:39.740 04:49:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.740 04:49:09 -- common/autotest_common.sh@10 -- # set +x 00:06:39.740 ************************************ 00:06:39.740 END TEST devices 00:06:39.740 ************************************ 00:06:39.740 00:06:39.740 real 0m34.672s 00:06:39.740 user 0m6.530s 00:06:39.740 sys 0m23.198s 00:06:39.740 04:49:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.740 ************************************ 00:06:39.740 04:49:09 -- common/autotest_common.sh@10 -- # set +x 00:06:39.740 END TEST setup.sh 00:06:39.740 ************************************ 00:06:39.740 04:49:09 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:39.998 Hugepages 00:06:39.998 node hugesize free / total 00:06:39.998 node0 1048576kB 0 / 0 00:06:39.998 node0 2048kB 2048 / 2048 00:06:39.998 00:06:39.998 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:39.998 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:40.256 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:40.256 04:49:09 -- spdk/autotest.sh@141 -- # uname -s 00:06:40.256 04:49:09 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:06:40.256 04:49:09 -- spdk/autotest.sh@143 -- # 
nvme_namespace_revert 00:06:40.256 04:49:09 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:40.514 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:40.514 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.064 04:49:12 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:43.631 04:49:13 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:43.631 04:49:13 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:43.631 04:49:13 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:06:43.631 04:49:13 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:06:43.631 04:49:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:43.631 04:49:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:43.631 04:49:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:43.631 04:49:13 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:43.631 04:49:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:43.631 04:49:13 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:43.631 04:49:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:06:43.631 04:49:13 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:43.889 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:43.889 Waiting for block devices as requested 00:06:43.889 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:06:44.148 04:49:13 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:06:44.148 04:49:13 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:06:44.148 04:49:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:44.148 04:49:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:06:44.148 04:49:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:06:44.148 04:49:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:06:44.148 04:49:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:06:44.148 04:49:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:44.148 04:49:13 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:06:44.148 04:49:13 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:06:44.148 04:49:13 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:06:44.148 04:49:13 -- common/autotest_common.sh@1530 -- # grep oacs 00:06:44.148 04:49:13 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:06:44.148 04:49:13 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:06:44.148 04:49:13 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:06:44.148 04:49:13 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:06:44.148 04:49:13 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:06:44.148 04:49:13 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:06:44.148 04:49:13 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:06:44.148 04:49:13 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:06:44.148 04:49:13 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:06:44.148 04:49:13 -- common/autotest_common.sh@1542 -- # continue 00:06:44.148 04:49:13 
-- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:06:44.148 04:49:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:44.148 04:49:13 -- common/autotest_common.sh@10 -- # set +x 00:06:44.148 04:49:13 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:06:44.148 04:49:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:44.148 04:49:13 -- common/autotest_common.sh@10 -- # set +x 00:06:44.148 04:49:13 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:44.406 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:44.664 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:06:46.569 04:49:16 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:06:46.569 04:49:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:46.569 04:49:16 -- common/autotest_common.sh@10 -- # set +x 00:06:46.569 04:49:16 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:06:46.569 04:49:16 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:46.569 04:49:16 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:46.569 04:49:16 -- common/autotest_common.sh@1562 -- # bdfs=() 00:06:46.569 04:49:16 -- common/autotest_common.sh@1562 -- # local bdfs 00:06:46.569 04:49:16 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:46.569 04:49:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:46.569 04:49:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:46.569 04:49:16 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:46.569 04:49:16 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:46.569 04:49:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:46.569 04:49:16 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:46.569 04:49:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:06:46.569 04:49:16 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:06:46.569 04:49:16 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:06:46.569 04:49:16 -- common/autotest_common.sh@1565 -- # device=0x0010 00:06:46.569 04:49:16 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:46.569 04:49:16 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:06:46.569 04:49:16 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:46.569 04:49:16 -- common/autotest_common.sh@1578 -- # return 0 00:06:46.569 04:49:16 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:06:46.569 04:49:16 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:46.569 04:49:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:46.569 04:49:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.569 04:49:16 -- common/autotest_common.sh@10 -- # set +x 00:06:46.569 ************************************ 00:06:46.569 START TEST unittest 00:06:46.569 ************************************ 00:06:46.569 04:49:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:46.569 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:46.569 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:06:46.569 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:06:46.569 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:46.569 ++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:06:46.569 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:46.569 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:46.569 ++ rpc_py=rpc_cmd 00:06:46.569 ++ set -e 00:06:46.569 ++ shopt -s nullglob 00:06:46.569 ++ shopt -s extglob 00:06:46.569 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:46.569 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:46.569 +++ CONFIG_WPDK_DIR= 00:06:46.569 +++ CONFIG_ASAN=y 00:06:46.569 +++ CONFIG_VBDEV_COMPRESS=n 00:06:46.569 +++ CONFIG_HAVE_EXECINFO_H=y 00:06:46.569 +++ CONFIG_USDT=n 00:06:46.569 +++ CONFIG_CUSTOMOCF=n 00:06:46.569 +++ CONFIG_PREFIX=/usr/local 00:06:46.569 +++ CONFIG_RBD=n 00:06:46.569 +++ CONFIG_LIBDIR= 00:06:46.569 +++ CONFIG_IDXD=y 00:06:46.569 +++ CONFIG_NVME_CUSE=y 00:06:46.569 +++ CONFIG_SMA=n 00:06:46.569 +++ CONFIG_VTUNE=n 00:06:46.569 +++ CONFIG_TSAN=n 00:06:46.569 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:46.569 +++ CONFIG_VFIO_USER_DIR= 00:06:46.569 +++ CONFIG_PGO_CAPTURE=n 00:06:46.569 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:46.569 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:46.569 +++ CONFIG_LTO=n 00:06:46.569 +++ CONFIG_ISCSI_INITIATOR=y 00:06:46.569 +++ CONFIG_CET=n 00:06:46.569 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:46.569 +++ CONFIG_OCF_PATH= 00:06:46.569 +++ CONFIG_RDMA_SET_TOS=y 00:06:46.569 +++ CONFIG_HAVE_ARC4RANDOM=n 00:06:46.569 +++ CONFIG_HAVE_LIBARCHIVE=n 00:06:46.569 +++ CONFIG_UBLK=n 00:06:46.569 +++ CONFIG_ISAL_CRYPTO=y 00:06:46.569 +++ CONFIG_OPENSSL_PATH= 00:06:46.569 +++ CONFIG_OCF=n 00:06:46.569 +++ CONFIG_FUSE=n 00:06:46.569 +++ CONFIG_VTUNE_DIR= 00:06:46.569 +++ CONFIG_FUZZER_LIB= 00:06:46.569 +++ CONFIG_FUZZER=n 00:06:46.569 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:06:46.569 +++ CONFIG_CRYPTO=n 00:06:46.569 +++ CONFIG_PGO_USE=n 00:06:46.569 +++ CONFIG_VHOST=y 00:06:46.569 +++ CONFIG_DAOS=n 00:06:46.569 +++ CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:06:46.569 +++ CONFIG_DAOS_DIR= 00:06:46.569 +++ CONFIG_UNIT_TESTS=y 00:06:46.569 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:46.569 +++ CONFIG_VIRTIO=y 00:06:46.569 +++ CONFIG_COVERAGE=y 00:06:46.569 +++ CONFIG_RDMA=y 00:06:46.569 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:46.569 +++ CONFIG_URING_PATH= 00:06:46.569 +++ CONFIG_XNVME=n 00:06:46.569 +++ CONFIG_VFIO_USER=n 00:06:46.569 +++ CONFIG_ARCH=native 00:06:46.569 +++ CONFIG_URING_ZNS=n 00:06:46.569 +++ CONFIG_WERROR=y 00:06:46.569 +++ CONFIG_HAVE_LIBBSD=n 00:06:46.569 +++ CONFIG_UBSAN=y 00:06:46.569 +++ CONFIG_IPSEC_MB_DIR= 00:06:46.569 +++ CONFIG_GOLANG=n 00:06:46.569 +++ CONFIG_ISAL=y 00:06:46.569 +++ CONFIG_IDXD_KERNEL=n 00:06:46.569 +++ CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:46.569 +++ CONFIG_RDMA_PROV=verbs 00:06:46.569 +++ CONFIG_APPS=y 00:06:46.569 +++ CONFIG_SHARED=n 00:06:46.569 +++ CONFIG_FC_PATH= 00:06:46.569 +++ CONFIG_DPDK_PKG_CONFIG=n 00:06:46.569 +++ CONFIG_FC=n 00:06:46.569 +++ CONFIG_AVAHI=n 00:06:46.569 +++ CONFIG_FIO_PLUGIN=y 00:06:46.569 +++ CONFIG_RAID5F=y 00:06:46.569 +++ CONFIG_EXAMPLES=y 00:06:46.570 +++ CONFIG_TESTS=y 00:06:46.570 +++ CONFIG_CRYPTO_MLX5=n 00:06:46.570 +++ CONFIG_MAX_LCORES= 00:06:46.570 +++ CONFIG_IPSEC_MB=n 00:06:46.570 +++ CONFIG_DEBUG=y 00:06:46.570 +++ CONFIG_DPDK_COMPRESSDEV=n 00:06:46.570 +++ CONFIG_CROSS_PREFIX= 00:06:46.570 +++ CONFIG_URING=n 00:06:46.570 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:46.570 +++++ dirname 
/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:46.570 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:46.570 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:46.570 +++ _root=/home/vagrant/spdk_repo/spdk 00:06:46.570 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:46.570 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:46.570 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:46.570 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:46.570 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:46.570 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:46.570 +++ VHOST_APP=("$_app_dir/vhost") 00:06:46.570 +++ DD_APP=("$_app_dir/spdk_dd") 00:06:46.570 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:06:46.570 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:46.570 +++ [[ #ifndef SPDK_CONFIG_H 00:06:46.570 #define SPDK_CONFIG_H 00:06:46.570 #define SPDK_CONFIG_APPS 1 00:06:46.570 #define SPDK_CONFIG_ARCH native 00:06:46.570 #define SPDK_CONFIG_ASAN 1 00:06:46.570 #undef SPDK_CONFIG_AVAHI 00:06:46.570 #undef SPDK_CONFIG_CET 00:06:46.570 #define SPDK_CONFIG_COVERAGE 1 00:06:46.570 #define SPDK_CONFIG_CROSS_PREFIX 00:06:46.570 #undef SPDK_CONFIG_CRYPTO 00:06:46.570 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:46.570 #undef SPDK_CONFIG_CUSTOMOCF 00:06:46.570 #undef SPDK_CONFIG_DAOS 00:06:46.570 #define SPDK_CONFIG_DAOS_DIR 00:06:46.570 #define SPDK_CONFIG_DEBUG 1 00:06:46.570 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:46.570 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:06:46.570 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:06:46.570 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:06:46.570 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:46.570 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:46.570 #define SPDK_CONFIG_EXAMPLES 1 00:06:46.570 #undef SPDK_CONFIG_FC 00:06:46.570 #define SPDK_CONFIG_FC_PATH 00:06:46.570 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:46.570 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:46.570 #undef SPDK_CONFIG_FUSE 00:06:46.570 #undef SPDK_CONFIG_FUZZER 00:06:46.570 #define SPDK_CONFIG_FUZZER_LIB 00:06:46.570 #undef SPDK_CONFIG_GOLANG 00:06:46.570 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:06:46.570 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:46.570 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:46.570 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:46.570 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:46.570 #define SPDK_CONFIG_IDXD 1 00:06:46.570 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:46.570 #undef SPDK_CONFIG_IPSEC_MB 00:06:46.570 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:46.570 #define SPDK_CONFIG_ISAL 1 00:06:46.570 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:46.570 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:46.570 #define SPDK_CONFIG_LIBDIR 00:06:46.570 #undef SPDK_CONFIG_LTO 00:06:46.570 #define SPDK_CONFIG_MAX_LCORES 00:06:46.570 #define SPDK_CONFIG_NVME_CUSE 1 00:06:46.570 #undef SPDK_CONFIG_OCF 00:06:46.570 #define SPDK_CONFIG_OCF_PATH 00:06:46.570 #define SPDK_CONFIG_OPENSSL_PATH 00:06:46.570 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:46.570 #undef SPDK_CONFIG_PGO_USE 00:06:46.570 #define SPDK_CONFIG_PREFIX /usr/local 00:06:46.570 #define SPDK_CONFIG_RAID5F 1 00:06:46.570 #undef SPDK_CONFIG_RBD 00:06:46.570 #define SPDK_CONFIG_RDMA 1 00:06:46.570 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:46.570 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:46.570 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:46.570 
#define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:46.570 #undef SPDK_CONFIG_SHARED 00:06:46.570 #undef SPDK_CONFIG_SMA 00:06:46.570 #define SPDK_CONFIG_TESTS 1 00:06:46.570 #undef SPDK_CONFIG_TSAN 00:06:46.570 #undef SPDK_CONFIG_UBLK 00:06:46.570 #define SPDK_CONFIG_UBSAN 1 00:06:46.570 #define SPDK_CONFIG_UNIT_TESTS 1 00:06:46.570 #undef SPDK_CONFIG_URING 00:06:46.570 #define SPDK_CONFIG_URING_PATH 00:06:46.570 #undef SPDK_CONFIG_URING_ZNS 00:06:46.570 #undef SPDK_CONFIG_USDT 00:06:46.570 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:46.570 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:46.570 #undef SPDK_CONFIG_VFIO_USER 00:06:46.570 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:46.570 #define SPDK_CONFIG_VHOST 1 00:06:46.570 #define SPDK_CONFIG_VIRTIO 1 00:06:46.570 #undef SPDK_CONFIG_VTUNE 00:06:46.570 #define SPDK_CONFIG_VTUNE_DIR 00:06:46.570 #define SPDK_CONFIG_WERROR 1 00:06:46.570 #define SPDK_CONFIG_WPDK_DIR 00:06:46.570 #undef SPDK_CONFIG_XNVME 00:06:46.570 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:46.570 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:46.570 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:46.570 +++ [[ -e /bin/wpdk_common.sh ]] 00:06:46.570 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.570 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.570 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:46.570 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:46.570 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:46.570 ++++ export PATH 00:06:46.570 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:46.570 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:46.570 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:46.570 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:46.570 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:46.570 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:46.570 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:46.570 +++ TEST_TAG=N/A 00:06:46.570 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:46.570 ++ : 1 00:06:46.570 ++ export RUN_NIGHTLY 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_RUN_VALGRIND 00:06:46.570 ++ : 1 00:06:46.570 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:06:46.570 ++ : 1 00:06:46.570 ++ export 
SPDK_TEST_UNITTEST 00:06:46.570 ++ : 00:06:46.570 ++ export SPDK_TEST_AUTOBUILD 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_RELEASE_BUILD 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_ISAL 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_ISCSI 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_ISCSI_INITIATOR 00:06:46.570 ++ : 1 00:06:46.570 ++ export SPDK_TEST_NVME 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_NVME_PMR 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_NVME_BP 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_NVME_CLI 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_NVME_CUSE 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_NVME_FDP 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_NVMF 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_VFIOUSER 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_VFIOUSER_QEMU 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_FUZZER 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_FUZZER_SHORT 00:06:46.570 ++ : rdma 00:06:46.570 ++ export SPDK_TEST_NVMF_TRANSPORT 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_RBD 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_VHOST 00:06:46.570 ++ : 1 00:06:46.570 ++ export SPDK_TEST_BLOCKDEV 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_IOAT 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_BLOBFS 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_VHOST_INIT 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_LVOL 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_VBDEV_COMPRESS 00:06:46.570 ++ : 1 00:06:46.570 ++ export SPDK_RUN_ASAN 00:06:46.570 ++ : 1 00:06:46.570 ++ export SPDK_RUN_UBSAN 00:06:46.570 ++ : /home/vagrant/spdk_repo/dpdk/build 00:06:46.570 ++ export SPDK_RUN_EXTERNAL_DPDK 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_RUN_NON_ROOT 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_CRYPTO 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_FTL 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_OCF 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_VMD 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_OPAL 00:06:46.570 ++ : v23.11 00:06:46.570 ++ export SPDK_TEST_NATIVE_DPDK 00:06:46.570 ++ : true 00:06:46.570 ++ export SPDK_AUTOTEST_X 00:06:46.570 ++ : 1 00:06:46.570 ++ export SPDK_TEST_RAID5 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_URING 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_USDT 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_USE_IGB_UIO 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_SCHEDULER 00:06:46.570 ++ : 0 00:06:46.570 ++ export SPDK_TEST_SCANBUILD 00:06:46.570 ++ : 00:06:46.570 ++ export SPDK_TEST_NVMF_NICS 00:06:46.570 ++ : 0 00:06:46.571 ++ export SPDK_TEST_SMA 00:06:46.571 ++ : 0 00:06:46.571 ++ export SPDK_TEST_DAOS 00:06:46.571 ++ : 0 00:06:46.571 ++ export SPDK_TEST_XNVME 00:06:46.571 ++ : 0 00:06:46.571 ++ export SPDK_TEST_ACCEL_DSA 00:06:46.571 ++ : 0 00:06:46.571 ++ export SPDK_TEST_ACCEL_IAA 00:06:46.571 ++ : 0 00:06:46.571 ++ export SPDK_TEST_ACCEL_IOAT 00:06:46.571 ++ : 00:06:46.571 ++ export SPDK_TEST_FUZZER_TARGET 00:06:46.571 ++ : 0 00:06:46.571 ++ export SPDK_TEST_NVMF_MDNS 00:06:46.571 ++ : 0 00:06:46.571 ++ export SPDK_JSONRPC_GO_CLIENT 00:06:46.571 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:46.571 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:46.571 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:46.571 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:46.571 ++ 
export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:46.571 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:46.571 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:46.571 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:46.571 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:46.571 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:06:46.571 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:46.571 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:46.571 ++ export PYTHONDONTWRITEBYTECODE=1 00:06:46.571 ++ PYTHONDONTWRITEBYTECODE=1 00:06:46.571 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:46.571 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:46.571 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:46.571 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:46.571 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:06:46.571 ++ rm -rf /var/tmp/asan_suppression_file 00:06:46.571 ++ cat 00:06:46.571 ++ echo leak:libfuse3.so 00:06:46.571 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:46.571 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:46.571 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:46.571 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:46.571 ++ '[' -z /var/spdk/dependencies ']' 00:06:46.571 ++ export DEPENDENCY_DIR 00:06:46.571 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:46.571 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:46.571 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:46.571 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:46.571 ++ export QEMU_BIN= 00:06:46.571 ++ QEMU_BIN= 00:06:46.571 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:06:46.571 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:06:46.571 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:46.571 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:46.571 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:46.571 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:46.571 ++ '[' 0 -eq 0 ']' 00:06:46.571 ++ export valgrind= 00:06:46.571 ++ valgrind= 00:06:46.571 +++ uname -s 00:06:46.571 ++ '[' Linux = Linux ']' 00:06:46.571 ++ HUGEMEM=4096 00:06:46.571 
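The exports traced above are autotest_common.sh preparing the sanitizer runtime: it writes a LeakSanitizer suppression file and exports the ASAN/UBSAN/LSAN option strings before any test binary runs, so the known libfuse3 leak does not fail the job. A minimal sketch of that pattern follows; the option strings and the leak:libfuse3.so entry are taken from the trace, while the surrounding script structure is assumed for illustration and is not the verbatim SPDK code.
# sketch only, condensed from the xtrace above, not the real autotest_common.sh
asan_suppression_file=/var/tmp/asan_suppression_file    # suppression list shared by ASan/LSan
rm -rf "$asan_suppression_file"
echo leak:libfuse3.so > "$asan_suppression_file"        # ignore the known libfuse3 leak
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
export LSAN_OPTIONS=suppressions=$asan_suppression_file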
++ export CLEAR_HUGE=yes 00:06:46.571 ++ CLEAR_HUGE=yes 00:06:46.571 ++ [[ 0 -eq 1 ]] 00:06:46.571 ++ [[ 0 -eq 1 ]] 00:06:46.571 ++ MAKE=make 00:06:46.571 +++ nproc 00:06:46.571 ++ MAKEFLAGS=-j10 00:06:46.571 ++ export HUGEMEM=4096 00:06:46.571 ++ HUGEMEM=4096 00:06:46.571 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:46.571 ++ NO_HUGE=() 00:06:46.571 ++ TEST_MODE= 00:06:46.571 ++ [[ -z '' ]] 00:06:46.571 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:06:46.571 ++ exec 00:06:46.571 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:06:46.571 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:06:46.571 ++ set_test_storage 2147483648 00:06:46.571 ++ [[ -v testdir ]] 00:06:46.571 ++ local requested_size=2147483648 00:06:46.571 ++ local mount target_dir 00:06:46.571 ++ local -A mounts fss sizes avails uses 00:06:46.571 ++ local source fs size avail mount use 00:06:46.571 ++ local storage_fallback storage_candidates 00:06:46.571 +++ mktemp -udt spdk.XXXXXX 00:06:46.571 ++ storage_fallback=/tmp/spdk.gZ7qcs 00:06:46.571 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:46.571 ++ [[ -n '' ]] 00:06:46.571 ++ [[ -n '' ]] 00:06:46.571 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.gZ7qcs/tests/unit /tmp/spdk.gZ7qcs 00:06:46.571 ++ requested_size=2214592512 00:06:46.571 ++ read -r source fs size use avail _ mount 00:06:46.571 +++ df -T 00:06:46.571 +++ grep -v Filesystem 00:06:46.571 ++ mounts["$mount"]=tmpfs 00:06:46.571 ++ fss["$mount"]=tmpfs 00:06:46.571 ++ avails["$mount"]=1252610048 00:06:46.571 ++ sizes["$mount"]=1253683200 00:06:46.571 ++ uses["$mount"]=1073152 00:06:46.571 ++ read -r source fs size use avail _ mount 00:06:46.571 ++ mounts["$mount"]=/dev/vda1 00:06:46.571 ++ fss["$mount"]=ext4 00:06:46.571 ++ avails["$mount"]=9161334784 00:06:46.571 ++ sizes["$mount"]=20616794112 00:06:46.571 ++ uses["$mount"]=11438682112 00:06:46.571 ++ read -r source fs size use avail _ mount 00:06:46.571 ++ mounts["$mount"]=tmpfs 00:06:46.571 ++ fss["$mount"]=tmpfs 00:06:46.571 ++ avails["$mount"]=6268403712 00:06:46.571 ++ sizes["$mount"]=6268403712 00:06:46.571 ++ uses["$mount"]=0 00:06:46.571 ++ read -r source fs size use avail _ mount 00:06:46.571 ++ mounts["$mount"]=tmpfs 00:06:46.571 ++ fss["$mount"]=tmpfs 00:06:46.571 ++ avails["$mount"]=5242880 00:06:46.571 ++ sizes["$mount"]=5242880 00:06:46.571 ++ uses["$mount"]=0 00:06:46.571 ++ read -r source fs size use avail _ mount 00:06:46.571 ++ mounts["$mount"]=/dev/vda15 00:06:46.571 ++ fss["$mount"]=vfat 00:06:46.571 ++ avails["$mount"]=103061504 00:06:46.571 ++ sizes["$mount"]=109395968 00:06:46.571 ++ uses["$mount"]=6334464 00:06:46.571 ++ read -r source fs size use avail _ mount 00:06:46.571 ++ mounts["$mount"]=tmpfs 00:06:46.571 ++ fss["$mount"]=tmpfs 00:06:46.571 ++ avails["$mount"]=1253675008 00:06:46.571 ++ sizes["$mount"]=1253679104 00:06:46.571 ++ uses["$mount"]=4096 00:06:46.571 ++ read -r source fs size use avail _ mount 00:06:46.571 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:06:46.571 ++ fss["$mount"]=fuse.sshfs 00:06:46.571 ++ avails["$mount"]=94017433600 00:06:46.571 ++ sizes["$mount"]=105088212992 00:06:46.571 ++ uses["$mount"]=5685346304 00:06:46.571 ++ read -r source fs size use avail _ mount 00:06:46.571 ++ printf '* Looking for test 
storage...\n' 00:06:46.571 * Looking for test storage... 00:06:46.571 ++ local target_space new_size 00:06:46.571 ++ for target_dir in "${storage_candidates[@]}" 00:06:46.571 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:06:46.571 +++ awk '$1 !~ /Filesystem/{print $6}' 00:06:46.571 ++ mount=/ 00:06:46.571 ++ target_space=9161334784 00:06:46.571 ++ (( target_space == 0 || target_space < requested_size )) 00:06:46.571 ++ (( target_space >= requested_size )) 00:06:46.571 ++ [[ ext4 == tmpfs ]] 00:06:46.571 ++ [[ ext4 == ramfs ]] 00:06:46.571 ++ [[ / == / ]] 00:06:46.571 ++ new_size=13653274624 00:06:46.571 ++ (( new_size * 100 / sizes[/] > 95 )) 00:06:46.571 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:06:46.571 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:06:46.571 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:06:46.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:06:46.571 ++ return 0 00:06:46.571 ++ set -o errtrace 00:06:46.571 ++ shopt -s extdebug 00:06:46.571 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:06:46.571 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:46.571 04:49:16 -- common/autotest_common.sh@1672 -- # true 00:06:46.571 04:49:16 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:06:46.571 04:49:16 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:06:46.571 04:49:16 -- common/autotest_common.sh@29 -- # exec 00:06:46.571 04:49:16 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:46.571 04:49:16 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:46.571 04:49:16 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:46.571 04:49:16 -- common/autotest_common.sh@18 -- # set -x 00:06:46.571 04:49:16 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:06:46.571 04:49:16 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:06:46.571 04:49:16 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:06:46.571 04:49:16 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:06:46.571 04:49:16 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:06:46.571 04:49:16 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:06:46.571 04:49:16 -- unit/unittest.sh@179 -- # hash lcov 00:06:46.571 04:49:16 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:46.571 04:49:16 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:46.571 04:49:16 -- unit/unittest.sh@180 -- # cov_avail=yes 00:06:46.571 04:49:16 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:06:46.571 04:49:16 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:06:46.571 04:49:16 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:46.571 04:49:16 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:46.571 04:49:16 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:06:46.571 --rc lcov_branch_coverage=1 00:06:46.571 --rc lcov_function_coverage=1 00:06:46.571 --rc genhtml_branch_coverage=1 00:06:46.571 --rc genhtml_function_coverage=1 00:06:46.572 --rc genhtml_legend=1 00:06:46.572 --rc geninfo_all_blocks=1 00:06:46.572 ' 00:06:46.572 04:49:16 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:06:46.572 --rc lcov_branch_coverage=1 00:06:46.572 --rc lcov_function_coverage=1 00:06:46.572 --rc genhtml_branch_coverage=1 00:06:46.572 --rc 
genhtml_function_coverage=1 00:06:46.572 --rc genhtml_legend=1 00:06:46.572 --rc geninfo_all_blocks=1 00:06:46.572 ' 00:06:46.572 04:49:16 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:06:46.572 --rc lcov_branch_coverage=1 00:06:46.572 --rc lcov_function_coverage=1 00:06:46.572 --rc genhtml_branch_coverage=1 00:06:46.572 --rc genhtml_function_coverage=1 00:06:46.572 --rc genhtml_legend=1 00:06:46.572 --rc geninfo_all_blocks=1 00:06:46.572 --no-external' 00:06:46.572 04:49:16 -- unit/unittest.sh@200 -- # LCOV='lcov 00:06:46.572 --rc lcov_branch_coverage=1 00:06:46.572 --rc lcov_function_coverage=1 00:06:46.572 --rc genhtml_branch_coverage=1 00:06:46.572 --rc genhtml_function_coverage=1 00:06:46.572 --rc genhtml_legend=1 00:06:46.572 --rc geninfo_all_blocks=1 00:06:46.572 --no-external' 00:06:46.572 04:49:16 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:07:04.732 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:07:04.732 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:07:04.732 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:07:04.732 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:07:04.732 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:07:04.732 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:07:31.277 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:07:31.277 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:07:31.277 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:07:31.277 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:07:31.277 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:07:31.277 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:07:31.277 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:07:31.277 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:07:31.277 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:07:31.277 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:07:31.277 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:07:31.277 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:07:31.277 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:07:31.277 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:07:31.277 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:07:31.277 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:07:31.277 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:07:31.277 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:07:31.277 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:07:31.277 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:07:31.277 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:07:31.277 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:07:31.278 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions 
found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:07:31.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:07:31.278 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:07:31.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:07:31.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:07:31.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:07:31.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:07:31.279 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:07:31.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:07:31.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:07:31.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:07:31.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:07:31.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:07:31.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:07:31.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:07:31.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:07:31.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:07:31.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:07:31.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:07:31.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:07:31.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:07:31.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:07:31.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:07:31.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:07:31.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:07:31.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:07:31.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:07:31.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:07:31.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:07:31.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:07:31.279 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:07:31.279 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:07:32.218 04:50:01 -- unit/unittest.sh@206 -- # uname -m 00:07:32.218 04:50:01 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:07:32.218 04:50:01 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:07:32.218 04:50:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:32.218 04:50:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.218 04:50:01 -- common/autotest_common.sh@10 -- # set +x 00:07:32.218 ************************************ 00:07:32.218 START TEST unittest_pci_event 00:07:32.218 ************************************ 00:07:32.218 04:50:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:07:32.218 00:07:32.218 00:07:32.218 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.218 http://cunit.sourceforge.net/ 00:07:32.218 00:07:32.218 
00:07:32.218 Suite: pci_event 00:07:32.218 Test: test_pci_parse_event ...[2024-04-27 04:50:02.025666] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:07:32.218 passed 00:07:32.218 00:07:32.218 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.218 suites 1 1 n/a 0 0 00:07:32.218 tests 1 1 1 0 0 00:07:32.218 asserts 15 15 15 0 n/a 00:07:32.218 00:07:32.218 Elapsed time = 0.001 seconds 00:07:32.218 [2024-04-27 04:50:02.026605] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:07:32.218 00:07:32.218 real 0m0.040s 00:07:32.218 user 0m0.017s 00:07:32.218 sys 0m0.017s 00:07:32.218 04:50:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.218 ************************************ 00:07:32.218 END TEST unittest_pci_event 00:07:32.218 04:50:02 -- common/autotest_common.sh@10 -- # set +x 00:07:32.218 ************************************ 00:07:32.218 04:50:02 -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:07:32.218 04:50:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:32.218 04:50:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.218 04:50:02 -- common/autotest_common.sh@10 -- # set +x 00:07:32.219 ************************************ 00:07:32.219 START TEST unittest_include 00:07:32.219 ************************************ 00:07:32.219 04:50:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:07:32.219 00:07:32.219 00:07:32.219 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.219 http://cunit.sourceforge.net/ 00:07:32.219 00:07:32.219 00:07:32.219 Suite: histogram 00:07:32.219 Test: histogram_test ...passed 00:07:32.479 Test: histogram_merge ...passed 00:07:32.479 00:07:32.479 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.479 suites 1 1 n/a 0 0 00:07:32.479 tests 2 2 2 0 0 00:07:32.479 asserts 50 50 50 0 n/a 00:07:32.479 00:07:32.479 Elapsed time = 0.006 seconds 00:07:32.479 00:07:32.479 real 0m0.035s 00:07:32.479 user 0m0.017s 00:07:32.479 sys 0m0.019s 00:07:32.479 04:50:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.479 ************************************ 00:07:32.479 END TEST unittest_include 00:07:32.479 ************************************ 00:07:32.479 04:50:02 -- common/autotest_common.sh@10 -- # set +x 00:07:32.479 04:50:02 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:07:32.479 04:50:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:32.479 04:50:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.479 04:50:02 -- common/autotest_common.sh@10 -- # set +x 00:07:32.479 ************************************ 00:07:32.479 START TEST unittest_bdev 00:07:32.479 ************************************ 00:07:32.479 04:50:02 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:07:32.479 04:50:02 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:07:32.479 00:07:32.479 00:07:32.479 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.479 http://cunit.sourceforge.net/ 00:07:32.479 00:07:32.479 00:07:32.479 Suite: bdev 00:07:32.479 Test: bytes_to_blocks_test ...passed 00:07:32.479 Test: num_blocks_test ...passed 00:07:32.479 Test: io_valid_test ...passed 00:07:32.479 
Test: open_write_test ...[2024-04-27 04:50:02.274786] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:07:32.479 [2024-04-27 04:50:02.275153] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:07:32.479 [2024-04-27 04:50:02.275299] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:07:32.479 passed 00:07:32.479 Test: claim_test ...passed 00:07:32.479 Test: alias_add_del_test ...[2024-04-27 04:50:02.358209] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:07:32.479 [2024-04-27 04:50:02.358375] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4578:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:07:32.479 [2024-04-27 04:50:02.358439] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:07:32.739 passed 00:07:32.739 Test: get_device_stat_test ...passed 00:07:32.739 Test: bdev_io_types_test ...passed 00:07:32.739 Test: bdev_io_wait_test ...passed 00:07:32.739 Test: bdev_io_spans_split_test ...passed 00:07:32.739 Test: bdev_io_boundary_split_test ...passed 00:07:32.739 Test: bdev_io_max_size_and_segment_split_test ...[2024-04-27 04:50:02.520864] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:07:32.739 passed 00:07:32.739 Test: bdev_io_mix_split_test ...passed 00:07:32.739 Test: bdev_io_split_with_io_wait ...passed 00:07:32.998 Test: bdev_io_write_unit_split_test ...[2024-04-27 04:50:02.643015] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:07:32.999 [2024-04-27 04:50:02.643171] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:07:32.999 [2024-04-27 04:50:02.643217] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:07:32.999 [2024-04-27 04:50:02.643262] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:07:32.999 passed 00:07:32.999 Test: bdev_io_alignment_with_boundary ...passed 00:07:32.999 Test: bdev_io_alignment ...passed 00:07:32.999 Test: bdev_histograms ...passed 00:07:32.999 Test: bdev_write_zeroes ...passed 00:07:32.999 Test: bdev_compare_and_write ...passed 00:07:33.257 Test: bdev_compare ...passed 00:07:33.257 Test: bdev_compare_emulated ...passed 00:07:33.257 Test: bdev_zcopy_write ...passed 00:07:33.257 Test: bdev_zcopy_read ...passed 00:07:33.257 Test: bdev_open_while_hotremove ...passed 00:07:33.257 Test: bdev_close_while_hotremove ...passed 00:07:33.257 Test: bdev_open_ext_test ...[2024-04-27 04:50:03.057915] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8041:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:07:33.257 passed 00:07:33.257 Test: bdev_open_ext_unregister ...[2024-04-27 04:50:03.058121] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8041:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:07:33.257 passed 00:07:33.257 Test: bdev_set_io_timeout ...passed 00:07:33.257 Test: bdev_set_qd_sampling ...passed 00:07:33.257 Test: lba_range_overlap 
...passed 00:07:33.516 Test: lock_lba_range_check_ranges ...passed 00:07:33.516 Test: lock_lba_range_with_io_outstanding ...passed 00:07:33.516 Test: lock_lba_range_overlapped ...passed 00:07:33.516 Test: bdev_quiesce ...[2024-04-27 04:50:03.245080] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9964:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:07:33.516 passed 00:07:33.516 Test: bdev_io_abort ...passed 00:07:33.516 Test: bdev_unmap ...passed 00:07:33.516 Test: bdev_write_zeroes_split_test ...passed 00:07:33.516 Test: bdev_set_options_test ...passed[2024-04-27 04:50:03.357690] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:07:33.516 00:07:33.516 Test: bdev_get_memory_domains ...passed 00:07:33.516 Test: bdev_io_ext ...passed 00:07:33.775 Test: bdev_io_ext_no_opts ...passed 00:07:33.775 Test: bdev_io_ext_invalid_opts ...passed 00:07:33.775 Test: bdev_io_ext_split ...passed 00:07:33.775 Test: bdev_io_ext_bounce_buffer ...passed 00:07:33.775 Test: bdev_register_uuid_alias ...[2024-04-27 04:50:03.539449] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name bf94da78-1a7b-438e-adaa-73ad0795dbdb already exists 00:07:33.775 [2024-04-27 04:50:03.539520] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:bf94da78-1a7b-438e-adaa-73ad0795dbdb alias for bdev bdev0 00:07:33.775 passed 00:07:33.775 Test: bdev_unregister_by_name ...[2024-04-27 04:50:03.560116] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7831:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:07:33.775 passed 00:07:33.775 Test: for_each_bdev_test ...[2024-04-27 04:50:03.560183] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7839:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:07:33.775 passed 00:07:33.775 Test: bdev_seek_test ...passed 00:07:33.775 Test: bdev_copy ...passed 00:07:33.775 Test: bdev_copy_split_test ...passed 00:07:33.775 Test: examine_locks ...passed 00:07:33.775 Test: claim_v2_rwo ...[2024-04-27 04:50:03.661855] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:33.775 [2024-04-27 04:50:03.661966] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:33.775 passed 00:07:33.775 Test: claim_v2_rom ...[2024-04-27 04:50:03.661989] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:33.775 [2024-04-27 04:50:03.662045] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:33.775 [2024-04-27 04:50:03.662063] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:33.775 [2024-04-27 04:50:03.662108] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8560:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:07:33.775 passed 00:07:33.775 Test: claim_v2_rwm ...[2024-04-27 04:50:03.662275] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:33.775 [2024-04-27 04:50:03.662335] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:33.775 [2024-04-27 04:50:03.662362] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:33.775 [2024-04-27 04:50:03.662387] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:33.775 [2024-04-27 04:50:03.662435] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:07:33.775 [2024-04-27 04:50:03.662467] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8598:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:07:33.775 [2024-04-27 04:50:03.662592] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8633:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:07:33.775 [2024-04-27 04:50:03.662649] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:33.775 [2024-04-27 04:50:03.662675] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:33.775 [2024-04-27 04:50:03.662703] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:33.775 [2024-04-27 04:50:03.662721] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 
already claimed: type read_many_write_many by module bdev_ut 00:07:33.775 [2024-04-27 04:50:03.662746] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8653:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:07:33.775 [2024-04-27 04:50:03.662783] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8633:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:07:33.775 passed 00:07:33.775 Test: claim_v2_existing_writer ...[2024-04-27 04:50:03.662910] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8598:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:07:33.775 [2024-04-27 04:50:03.662940] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8598:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:07:33.775 passed 00:07:33.775 Test: claim_v2_existing_v1 ...passed 00:07:33.775 Test: claim_v1_existing_v2 ...[2024-04-27 04:50:03.663047] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:07:33.775 [2024-04-27 04:50:03.663077] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:07:33.775 [2024-04-27 04:50:03.663094] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:07:33.775 [2024-04-27 04:50:03.663199] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:33.775 [2024-04-27 04:50:03.663252] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:33.775 passed 00:07:33.775 Test: examine_claimed ...[2024-04-27 04:50:03.663285] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:33.775 passed 00:07:33.775 00:07:33.775 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.775 suites 1 1 n/a 0 0 00:07:33.775 tests 59 59 59 0 0 00:07:33.775 asserts 4599 4599 4599 0 n/a 00:07:33.775 00:07:33.775 Elapsed time = 1.466 seconds 00:07:33.775 [2024-04-27 04:50:03.663556] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:07:34.035 04:50:03 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:07:34.035 00:07:34.035 00:07:34.035 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.035 http://cunit.sourceforge.net/ 00:07:34.035 00:07:34.035 00:07:34.035 Suite: nvme 00:07:34.035 Test: test_create_ctrlr ...passed 00:07:34.035 Test: test_reset_ctrlr ...[2024-04-27 04:50:03.719870] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:07:34.035 passed 00:07:34.035 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:07:34.035 Test: test_failover_ctrlr ...passed 00:07:34.035 Test: test_race_between_failover_and_add_secondary_trid ...[2024-04-27 04:50:03.722659] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.035 [2024-04-27 04:50:03.722921] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.035 [2024-04-27 04:50:03.723152] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.035 passed 00:07:34.035 Test: test_pending_reset ...[2024-04-27 04:50:03.725258] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.035 [2024-04-27 04:50:03.725600] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.035 passed 00:07:34.035 Test: test_attach_ctrlr ...[2024-04-27 04:50:03.726796] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:07:34.035 passed 00:07:34.035 Test: test_aer_cb ...passed 00:07:34.035 Test: test_submit_nvme_cmd ...passed 00:07:34.035 Test: test_add_remove_trid ...passed 00:07:34.035 Test: test_abort ...[2024-04-27 04:50:03.730878] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7221:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:07:34.035 passed 00:07:34.035 Test: test_get_io_qpair ...passed 00:07:34.035 Test: test_bdev_unregister ...passed 00:07:34.035 Test: test_compare_ns ...passed 00:07:34.035 Test: test_init_ana_log_page ...passed 00:07:34.035 Test: test_get_memory_domains ...passed 00:07:34.035 Test: test_reconnect_qpair ...[2024-04-27 04:50:03.734122] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.035 passed 00:07:34.035 Test: test_create_bdev_ctrlr ...[2024-04-27 04:50:03.734748] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5273:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:07:34.035 passed 00:07:34.035 Test: test_add_multi_ns_to_bdev ...[2024-04-27 04:50:03.736264] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4486:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:07:34.035 passed 00:07:34.035 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:07:34.035 Test: test_admin_path ...passed 00:07:34.035 Test: test_reset_bdev_ctrlr ...passed 00:07:34.036 Test: test_find_io_path ...passed 00:07:34.036 Test: test_retry_io_if_ana_state_is_updating ...passed 00:07:34.036 Test: test_retry_io_for_io_path_error ...passed 00:07:34.036 Test: test_retry_io_count ...passed 00:07:34.036 Test: test_concurrent_read_ana_log_page ...passed 00:07:34.036 Test: test_retry_io_for_ana_error ...passed 00:07:34.036 Test: test_check_io_error_resiliency_params ...[2024-04-27 04:50:03.744298] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5926:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:07:34.036 [2024-04-27 04:50:03.744439] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5930:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:07:34.036 [2024-04-27 04:50:03.744500] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5939:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:07:34.036 [2024-04-27 04:50:03.744595] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5942:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:07:34.036 [2024-04-27 04:50:03.744637] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5954:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:07:34.036 [2024-04-27 04:50:03.744686] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5954:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:07:34.036 [2024-04-27 04:50:03.744729] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5934:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:07:34.036 [2024-04-27 04:50:03.744833] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5949:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:07:34.036 [2024-04-27 04:50:03.744919] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5946:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:07:34.036 passed 00:07:34.036 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:07:34.036 Test: test_reconnect_ctrlr ...[2024-04-27 04:50:03.746229] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.036 [2024-04-27 04:50:03.746441] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.036 [2024-04-27 04:50:03.746761] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.036 [2024-04-27 04:50:03.746917] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.036 [2024-04-27 04:50:03.747082] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.036 passed 00:07:34.036 Test: test_retry_failover_ctrlr ...[2024-04-27 04:50:03.747470] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.036 passed 00:07:34.036 Test: test_fail_path ...[2024-04-27 04:50:03.748092] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.036 [2024-04-27 04:50:03.748294] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:07:34.036 [2024-04-27 04:50:03.748484] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.036 [2024-04-27 04:50:03.748669] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.036 [2024-04-27 04:50:03.748828] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.036 passed 00:07:34.036 Test: test_nvme_ns_cmp ...passed 00:07:34.036 Test: test_ana_transition ...passed 00:07:34.036 Test: test_set_preferred_path ...passed 00:07:34.036 Test: test_find_next_io_path ...passed 00:07:34.036 Test: test_find_io_path_min_qd ...passed 00:07:34.036 Test: test_disable_auto_failback ...[2024-04-27 04:50:03.750780] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.036 passed 00:07:34.036 Test: test_set_multipath_policy ...passed 00:07:34.036 Test: test_uuid_generation ...passed 00:07:34.036 Test: test_retry_io_to_same_path ...passed 00:07:34.036 Test: test_race_between_reset_and_disconnected ...passed 00:07:34.036 Test: test_ctrlr_op_rpc ...passed 00:07:34.036 Test: test_bdev_ctrlr_op_rpc ...passed 00:07:34.036 Test: test_disable_enable_ctrlr ...[2024-04-27 04:50:03.754936] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.036 [2024-04-27 04:50:03.755117] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:34.036 passed 00:07:34.036 Test: test_delete_ctrlr_done ...passed 00:07:34.036 Test: test_ns_remove_during_reset ...passed 00:07:34.036 00:07:34.036 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.036 suites 1 1 n/a 0 0 00:07:34.036 tests 48 48 48 0 0 00:07:34.036 asserts 3553 3553 3553 0 n/a 00:07:34.036 00:07:34.036 Elapsed time = 0.038 seconds 00:07:34.036 04:50:03 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:07:34.036 Test Options 00:07:34.036 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:07:34.036 00:07:34.036 00:07:34.036 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.036 http://cunit.sourceforge.net/ 00:07:34.036 00:07:34.036 00:07:34.036 Suite: raid 00:07:34.036 Test: test_create_raid ...passed 00:07:34.036 Test: test_create_raid_superblock ...passed 00:07:34.036 Test: test_delete_raid ...passed 00:07:34.036 Test: test_create_raid_invalid_args ...[2024-04-27 04:50:03.801513] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:07:34.036 [2024-04-27 04:50:03.802087] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:07:34.036 [2024-04-27 04:50:03.802764] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:07:34.036 [2024-04-27 04:50:03.803104] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:07:34.036 [2024-04-27 04:50:03.804020] 
/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:07:34.036 passed 00:07:34.036 Test: test_delete_raid_invalid_args ...passed 00:07:34.036 Test: test_io_channel ...passed 00:07:34.036 Test: test_reset_io ...passed 00:07:34.036 Test: test_write_io ...passed 00:07:34.036 Test: test_read_io ...passed 00:07:34.974 Test: test_unmap_io ...passed 00:07:34.974 Test: test_io_failure ...[2024-04-27 04:50:04.739710] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:07:34.974 passed 00:07:34.974 Test: test_multi_raid_no_io ...passed 00:07:34.974 Test: test_multi_raid_with_io ...passed 00:07:34.974 Test: test_io_type_supported ...passed 00:07:34.974 Test: test_raid_json_dump_info ...passed 00:07:34.974 Test: test_context_size ...passed 00:07:34.974 Test: test_raid_level_conversions ...passed 00:07:34.974 Test: test_raid_process ...passed 00:07:34.974 Test: test_raid_io_split ...passed 00:07:34.974 00:07:34.974 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.974 suites 1 1 n/a 0 0 00:07:34.974 tests 19 19 19 0 0 00:07:34.974 asserts 177879 177879 177879 0 n/a 00:07:34.974 00:07:34.974 Elapsed time = 0.952 seconds 00:07:34.974 04:50:04 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:07:34.974 00:07:34.974 00:07:34.974 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.974 http://cunit.sourceforge.net/ 00:07:34.974 00:07:34.974 00:07:34.974 Suite: raid_sb 00:07:34.974 Test: test_raid_bdev_write_superblock ...passed 00:07:34.974 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:07:34.974 Test: test_raid_bdev_parse_superblock ...[2024-04-27 04:50:04.797678] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:07:34.974 passed 00:07:34.974 00:07:34.974 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.974 suites 1 1 n/a 0 0 00:07:34.974 tests 3 3 3 0 0 00:07:34.974 asserts 32 32 32 0 n/a 00:07:34.974 00:07:34.974 Elapsed time = 0.001 seconds 00:07:34.974 04:50:04 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:07:34.974 00:07:34.974 00:07:34.974 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.974 http://cunit.sourceforge.net/ 00:07:34.974 00:07:34.974 00:07:34.974 Suite: concat 00:07:34.974 Test: test_concat_start ...passed 00:07:34.974 Test: test_concat_rw ...passed 00:07:34.974 Test: test_concat_null_payload ...passed 00:07:34.974 00:07:34.974 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.974 suites 1 1 n/a 0 0 00:07:34.974 tests 3 3 3 0 0 00:07:34.974 asserts 8097 8097 8097 0 n/a 00:07:34.974 00:07:34.974 Elapsed time = 0.006 seconds 00:07:34.974 04:50:04 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:07:35.257 00:07:35.257 00:07:35.257 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.257 http://cunit.sourceforge.net/ 00:07:35.257 00:07:35.257 00:07:35.257 Suite: raid1 00:07:35.257 Test: test_raid1_start ...passed 00:07:35.257 Test: test_raid1_read_balancing ...passed 00:07:35.257 00:07:35.257 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.257 suites 1 1 n/a 0 0 00:07:35.257 tests 2 2 2 0 0 00:07:35.257 asserts 2856 2856 2856 0 
n/a 00:07:35.257 00:07:35.257 Elapsed time = 0.004 seconds 00:07:35.257 04:50:04 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:07:35.257 00:07:35.257 00:07:35.257 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.257 http://cunit.sourceforge.net/ 00:07:35.257 00:07:35.257 00:07:35.257 Suite: zone 00:07:35.257 Test: test_zone_get_operation ...passed 00:07:35.257 Test: test_bdev_zone_get_info ...passed 00:07:35.257 Test: test_bdev_zone_management ...passed 00:07:35.257 Test: test_bdev_zone_append ...passed 00:07:35.257 Test: test_bdev_zone_append_with_md ...passed 00:07:35.257 Test: test_bdev_zone_appendv ...passed 00:07:35.257 Test: test_bdev_zone_appendv_with_md ...passed 00:07:35.257 Test: test_bdev_io_get_append_location ...passed 00:07:35.257 00:07:35.257 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.257 suites 1 1 n/a 0 0 00:07:35.257 tests 8 8 8 0 0 00:07:35.257 asserts 94 94 94 0 n/a 00:07:35.257 00:07:35.257 Elapsed time = 0.000 seconds 00:07:35.257 04:50:04 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:07:35.257 00:07:35.257 00:07:35.257 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.257 http://cunit.sourceforge.net/ 00:07:35.257 00:07:35.257 00:07:35.257 Suite: gpt_parse 00:07:35.257 Test: test_parse_mbr_and_primary ...[2024-04-27 04:50:04.927155] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:35.257 [2024-04-27 04:50:04.927454] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:35.257 [2024-04-27 04:50:04.927524] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:07:35.257 [2024-04-27 04:50:04.927623] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:07:35.257 [2024-04-27 04:50:04.927677] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:07:35.257 [2024-04-27 04:50:04.927776] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:07:35.257 passed 00:07:35.257 Test: test_parse_secondary ...[2024-04-27 04:50:04.928570] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:07:35.257 [2024-04-27 04:50:04.928631] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:07:35.257 [2024-04-27 04:50:04.928681] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:07:35.257 [2024-04-27 04:50:04.928728] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:07:35.257 passed 00:07:35.257 Test: test_check_mbr ...[2024-04-27 04:50:04.929491] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:35.257 passed 00:07:35.257 Test: test_read_header ...[2024-04-27 04:50:04.929550] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:35.257 [2024-04-27 04:50:04.929623] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:07:35.257 [2024-04-27 04:50:04.929723] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:07:35.257 [2024-04-27 04:50:04.929807] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:07:35.257 [2024-04-27 04:50:04.929853] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:07:35.257 [2024-04-27 04:50:04.929900] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:07:35.257 [2024-04-27 04:50:04.929942] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:07:35.257 passed 00:07:35.257 Test: test_read_partitions ...[2024-04-27 04:50:04.930004] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:07:35.257 [2024-04-27 04:50:04.930060] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:07:35.257 [2024-04-27 04:50:04.930100] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:07:35.257 [2024-04-27 04:50:04.930134] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:07:35.257 [2024-04-27 04:50:04.930524] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:07:35.257 passed 00:07:35.257 00:07:35.257 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.257 suites 1 1 n/a 0 0 00:07:35.257 tests 5 5 5 0 0 00:07:35.257 asserts 33 33 33 0 n/a 00:07:35.257 00:07:35.257 Elapsed time = 0.004 seconds 00:07:35.257 04:50:04 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:07:35.257 00:07:35.257 00:07:35.258 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.258 http://cunit.sourceforge.net/ 00:07:35.258 00:07:35.258 00:07:35.258 Suite: bdev_part 00:07:35.258 Test: part_test ...[2024-04-27 04:50:04.964379] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:07:35.258 passed 00:07:35.258 Test: part_free_test ...passed 00:07:35.258 Test: part_get_io_channel_test ...passed 00:07:35.258 Test: part_construct_ext ...passed 00:07:35.258 00:07:35.258 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.258 suites 1 1 n/a 0 0 00:07:35.258 tests 4 4 4 0 0 00:07:35.258 asserts 48 48 48 0 n/a 00:07:35.258 00:07:35.258 Elapsed time = 0.060 seconds 00:07:35.258 04:50:05 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:07:35.258 00:07:35.258 00:07:35.258 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.258 http://cunit.sourceforge.net/ 00:07:35.258 00:07:35.258 00:07:35.258 Suite: scsi_nvme_suite 00:07:35.258 Test: scsi_nvme_translate_test ...passed 00:07:35.258 00:07:35.258 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.258 suites 1 1 n/a 0 0 00:07:35.258 tests 1 1 1 0 0 00:07:35.258 asserts 104 104 104 0 n/a 00:07:35.258 00:07:35.258 Elapsed time = 0.000 seconds 
00:07:35.258 04:50:05 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:07:35.258 00:07:35.258 00:07:35.258 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.258 http://cunit.sourceforge.net/ 00:07:35.258 00:07:35.258 00:07:35.258 Suite: lvol 00:07:35.258 Test: ut_lvs_init ...[2024-04-27 04:50:05.094636] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:07:35.258 [2024-04-27 04:50:05.095117] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:07:35.258 passed 00:07:35.258 Test: ut_lvol_init ...passed 00:07:35.258 Test: ut_lvol_snapshot ...passed 00:07:35.258 Test: ut_lvol_clone ...passed 00:07:35.258 Test: ut_lvs_destroy ...passed 00:07:35.258 Test: ut_lvs_unload ...passed 00:07:35.258 Test: ut_lvol_resize ...[2024-04-27 04:50:05.096762] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:07:35.258 passed 00:07:35.258 Test: ut_lvol_set_read_only ...passed 00:07:35.258 Test: ut_lvol_hotremove ...passed 00:07:35.258 Test: ut_vbdev_lvol_get_io_channel ...passed 00:07:35.258 Test: ut_vbdev_lvol_io_type_supported ...passed 00:07:35.258 Test: ut_lvol_read_write ...passed 00:07:35.258 Test: ut_vbdev_lvol_submit_request ...passed 00:07:35.258 Test: ut_lvol_examine_config ...passed 00:07:35.258 Test: ut_lvol_examine_disk ...[2024-04-27 04:50:05.097523] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:07:35.258 passed 00:07:35.258 Test: ut_lvol_rename ...[2024-04-27 04:50:05.098576] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:07:35.258 [2024-04-27 04:50:05.098711] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:07:35.258 passed 00:07:35.258 Test: ut_bdev_finish ...passed 00:07:35.258 Test: ut_lvs_rename ...passed 00:07:35.258 Test: ut_lvol_seek ...passed 00:07:35.258 Test: ut_esnap_dev_create ...[2024-04-27 04:50:05.099546] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:07:35.258 [2024-04-27 04:50:05.099640] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:07:35.258 [2024-04-27 04:50:05.099676] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:07:35.258 [2024-04-27 04:50:05.099762] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:07:35.258 passed 00:07:35.258 Test: ut_lvol_esnap_clone_bad_args ...[2024-04-27 04:50:05.099936] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:07:35.258 [2024-04-27 04:50:05.099980] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:07:35.258 passed 00:07:35.258 00:07:35.258 Run Summary: Type Total Ran Passed Failed 
Inactive 00:07:35.258 suites 1 1 n/a 0 0 00:07:35.258 tests 21 21 21 0 0 00:07:35.258 asserts 712 712 712 0 n/a 00:07:35.258 00:07:35.258 Elapsed time = 0.006 seconds 00:07:35.258 04:50:05 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:07:35.524 00:07:35.524 00:07:35.524 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.524 http://cunit.sourceforge.net/ 00:07:35.524 00:07:35.524 00:07:35.524 Suite: zone_block 00:07:35.524 Test: test_zone_block_create ...passed 00:07:35.524 Test: test_zone_block_create_invalid ...[2024-04-27 04:50:05.161650] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:07:35.524 [2024-04-27 04:50:05.162067] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-04-27 04:50:05.162300] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:07:35.525 [2024-04-27 04:50:05.162405] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-04-27 04:50:05.162637] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:07:35.525 [2024-04-27 04:50:05.162687] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-04-27 04:50:05.162806] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:07:35.525 [2024-04-27 04:50:05.162869] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:07:35.525 Test: test_get_zone_info ...[2024-04-27 04:50:05.163521] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 [2024-04-27 04:50:05.163633] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 [2024-04-27 04:50:05.163734] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 passed 00:07:35.525 Test: test_supported_io_types ...passed 00:07:35.525 Test: test_reset_zone ...[2024-04-27 04:50:05.164681] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 [2024-04-27 04:50:05.164763] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 passed 00:07:35.525 Test: test_open_zone ...[2024-04-27 04:50:05.165280] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 [2024-04-27 04:50:05.166032] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:07:35.525 [2024-04-27 04:50:05.166119] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 passed 00:07:35.525 Test: test_zone_write ...[2024-04-27 04:50:05.166663] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:07:35.525 [2024-04-27 04:50:05.166748] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 [2024-04-27 04:50:05.166807] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:07:35.525 [2024-04-27 04:50:05.166868] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 [2024-04-27 04:50:05.173793] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:07:35.525 [2024-04-27 04:50:05.173859] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 [2024-04-27 04:50:05.173954] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:07:35.525 [2024-04-27 04:50:05.174001] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 [2024-04-27 04:50:05.181020] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:07:35.525 [2024-04-27 04:50:05.181103] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 passed 00:07:35.525 Test: test_zone_read ...[2024-04-27 04:50:05.181655] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:07:35.525 [2024-04-27 04:50:05.181712] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 [2024-04-27 04:50:05.181806] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:07:35.525 [2024-04-27 04:50:05.181855] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 [2024-04-27 04:50:05.182370] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:07:35.525 [2024-04-27 04:50:05.182429] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 passed 00:07:35.525 Test: test_close_zone ...[2024-04-27 04:50:05.182903] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:07:35.525 [2024-04-27 04:50:05.182993] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 [2024-04-27 04:50:05.183245] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 [2024-04-27 04:50:05.183324] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 passed 00:07:35.525 Test: test_finish_zone ...[2024-04-27 04:50:05.183988] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 [2024-04-27 04:50:05.184071] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 passed 00:07:35.525 Test: test_append_zone ...[2024-04-27 04:50:05.184436] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:07:35.525 [2024-04-27 04:50:05.184496] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 [2024-04-27 04:50:05.184569] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:07:35.525 [2024-04-27 04:50:05.184621] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:35.525 [2024-04-27 04:50:05.197246] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:07:35.525 [2024-04-27 04:50:05.197315] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:07:35.525 passed 00:07:35.525 00:07:35.525 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.525 suites 1 1 n/a 0 0 00:07:35.525 tests 11 11 11 0 0 00:07:35.525 asserts 3437 3437 3437 0 n/a 00:07:35.525 00:07:35.525 Elapsed time = 0.037 seconds 00:07:35.525 04:50:05 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:07:35.525 00:07:35.525 00:07:35.525 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.525 http://cunit.sourceforge.net/ 00:07:35.525 00:07:35.525 00:07:35.525 Suite: bdev 00:07:35.525 Test: basic ...[2024-04-27 04:50:05.301777] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55cfb043e901): Operation not permitted (rc=-1) 00:07:35.525 [2024-04-27 04:50:05.302198] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55cfb043e8c0): Operation not permitted (rc=-1) 00:07:35.525 [2024-04-27 04:50:05.302260] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55cfb043e901): Operation not permitted (rc=-1) 00:07:35.525 passed 00:07:35.525 Test: unregister_and_close ...passed 00:07:35.783 Test: unregister_and_close_different_threads ...passed 00:07:35.783 Test: basic_qos ...passed 00:07:35.783 Test: put_channel_during_reset ...passed 00:07:35.783 Test: aborted_reset ...passed 00:07:35.783 Test: aborted_reset_no_outstanding_io ...passed 00:07:36.043 Test: io_during_reset ...passed 00:07:36.043 Test: reset_completions ...passed 00:07:36.043 Test: io_during_qos_queue ...passed 00:07:36.043 Test: io_during_qos_reset ...passed 00:07:36.043 Test: enomem ...passed 00:07:36.301 Test: enomem_multi_bdev ...passed 00:07:36.301 Test: enomem_multi_bdev_unregister ...passed 00:07:36.301 Test: enomem_multi_io_target ...passed 00:07:36.301 Test: qos_dynamic_enable ...passed 00:07:36.301 Test: bdev_histograms_mt ...passed 00:07:36.301 Test: bdev_set_io_timeout_mt ...[2024-04-27 04:50:06.181435] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:07:36.301 passed 00:07:36.559 Test: lock_lba_range_then_submit_io ...[2024-04-27 04:50:06.202538] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x55cfb043e880 already registered (old:0x6130000003c0 new:0x613000000c80) 00:07:36.559 passed 00:07:36.559 Test: unregister_during_reset ...passed 00:07:36.559 Test: event_notify_and_close ...passed 00:07:36.559 Suite: bdev_wrong_thread 00:07:36.560 Test: spdk_bdev_register_wt ...[2024-04-27 04:50:06.316939] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8359:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000000880 (0x618000000880) 00:07:36.560 passed 00:07:36.560 Test: spdk_bdev_examine_wt ...[2024-04-27 04:50:06.317376] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000000880 (0x618000000880) 00:07:36.560 passed 00:07:36.560 00:07:36.560 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.560 suites 2 2 n/a 0 0 00:07:36.560 tests 23 23 23 0 0 00:07:36.560 asserts 601 601 601 0 n/a 00:07:36.560 00:07:36.560 Elapsed time = 1.047 seconds 00:07:36.560 00:07:36.560 real 0m4.171s 00:07:36.560 user 0m1.965s 00:07:36.560 sys 0m2.211s 00:07:36.560 04:50:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.560 04:50:06 -- common/autotest_common.sh@10 -- # set +x 00:07:36.560 ************************************ 00:07:36.560 END TEST 
unittest_bdev 00:07:36.560 ************************************ 00:07:36.560 04:50:06 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:36.560 04:50:06 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:36.560 04:50:06 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:36.560 04:50:06 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:36.560 04:50:06 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:07:36.560 04:50:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.560 04:50:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.560 04:50:06 -- common/autotest_common.sh@10 -- # set +x 00:07:36.560 ************************************ 00:07:36.560 START TEST unittest_bdev_raid5f 00:07:36.560 ************************************ 00:07:36.560 04:50:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:07:36.560 00:07:36.560 00:07:36.560 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.560 http://cunit.sourceforge.net/ 00:07:36.560 00:07:36.560 00:07:36.560 Suite: raid5f 00:07:36.560 Test: test_raid5f_start ...passed 00:07:37.494 Test: test_raid5f_submit_read_request ...passed 00:07:37.495 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:07:40.781 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:07:58.920 Test: test_raid5f_chunk_write_error ...passed 00:08:07.035 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:08:10.320 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:08:42.413 Test: test_raid5f_submit_read_request_degraded ...passed 00:08:42.413 00:08:42.413 Run Summary: Type Total Ran Passed Failed Inactive 00:08:42.414 suites 1 1 n/a 0 0 00:08:42.414 tests 8 8 8 0 0 00:08:42.414 asserts 351864 351864 351864 0 n/a 00:08:42.414 00:08:42.414 Elapsed time = 61.987 seconds 00:08:42.414 00:08:42.414 real 1m2.085s 00:08:42.414 user 0m58.620s 00:08:42.414 sys 0m3.460s 00:08:42.414 04:51:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.414 04:51:08 -- common/autotest_common.sh@10 -- # set +x 00:08:42.414 ************************************ 00:08:42.414 END TEST unittest_bdev_raid5f 00:08:42.414 ************************************ 00:08:42.414 04:51:08 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:08:42.414 04:51:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:42.414 04:51:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:42.414 04:51:08 -- common/autotest_common.sh@10 -- # set +x 00:08:42.414 ************************************ 00:08:42.414 START TEST unittest_blob_blobfs 00:08:42.414 ************************************ 00:08:42.414 04:51:08 -- common/autotest_common.sh@1104 -- # unittest_blob 00:08:42.414 04:51:08 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:08:42.414 04:51:08 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:08:42.414 00:08:42.414 00:08:42.414 CUnit - A unit testing framework for C - Version 2.1-3 00:08:42.414 http://cunit.sourceforge.net/ 
00:08:42.414 00:08:42.414 00:08:42.414 Suite: blob_nocopy_noextent 00:08:42.414 Test: blob_init ...[2024-04-27 04:51:08.590905] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:42.414 passed 00:08:42.414 Test: blob_thin_provision ...passed 00:08:42.414 Test: blob_read_only ...passed 00:08:42.414 Test: bs_load ...[2024-04-27 04:51:08.720768] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:42.414 passed 00:08:42.414 Test: bs_load_custom_cluster_size ...passed 00:08:42.414 Test: bs_load_after_failed_grow ...passed 00:08:42.414 Test: bs_cluster_sz ...[2024-04-27 04:51:08.770741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:42.414 [2024-04-27 04:51:08.771236] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:08:42.414 [2024-04-27 04:51:08.771424] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:42.414 passed 00:08:42.414 Test: bs_resize_md ...passed 00:08:42.414 Test: bs_destroy ...passed 00:08:42.414 Test: bs_type ...passed 00:08:42.414 Test: bs_super_block ...passed 00:08:42.414 Test: bs_test_recover_cluster_count ...passed 00:08:42.414 Test: bs_grow_live ...passed 00:08:42.414 Test: bs_grow_live_no_space ...passed 00:08:42.414 Test: bs_test_grow ...passed 00:08:42.414 Test: blob_serialize_test ...passed 00:08:42.414 Test: super_block_crc ...passed 00:08:42.414 Test: blob_thin_prov_write_count_io ...passed 00:08:42.414 Test: bs_load_iter_test ...passed 00:08:42.414 Test: blob_relations ...[2024-04-27 04:51:09.040602] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:42.414 [2024-04-27 04:51:09.040813] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:42.414 [2024-04-27 04:51:09.042437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:42.414 [2024-04-27 04:51:09.042530] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:42.414 passed 00:08:42.414 Test: blob_relations2 ...[2024-04-27 04:51:09.069069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:42.414 [2024-04-27 04:51:09.069246] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:42.414 [2024-04-27 04:51:09.069341] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:42.414 [2024-04-27 04:51:09.069412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:42.414 [2024-04-27 04:51:09.071847] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:42.414 [2024-04-27 04:51:09.071955] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: 
Failed to remove blob 00:08:42.414 [2024-04-27 04:51:09.072738] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:42.414 [2024-04-27 04:51:09.072856] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:42.414 passed 00:08:42.414 Test: blob_relations3 ...passed 00:08:42.414 Test: blobstore_clean_power_failure ...passed 00:08:42.414 Test: blob_delete_snapshot_power_failure ...[2024-04-27 04:51:09.356357] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:42.414 [2024-04-27 04:51:09.376499] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:42.414 [2024-04-27 04:51:09.376645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:42.414 [2024-04-27 04:51:09.376703] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:42.414 [2024-04-27 04:51:09.396769] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:42.414 [2024-04-27 04:51:09.396890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:42.414 [2024-04-27 04:51:09.396952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:42.414 [2024-04-27 04:51:09.396996] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:42.414 [2024-04-27 04:51:09.417470] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:42.414 [2024-04-27 04:51:09.417649] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:42.414 [2024-04-27 04:51:09.438442] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:42.414 [2024-04-27 04:51:09.438627] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:42.414 [2024-04-27 04:51:09.459795] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:42.414 [2024-04-27 04:51:09.459952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:42.414 passed 00:08:42.414 Test: blob_create_snapshot_power_failure ...[2024-04-27 04:51:09.522366] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:42.414 [2024-04-27 04:51:09.563010] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:42.414 [2024-04-27 04:51:09.583116] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:42.414 passed 00:08:42.414 Test: blob_io_unit ...passed 00:08:42.414 Test: blob_io_unit_compatibility ...passed 00:08:42.414 Test: blob_ext_md_pages ...passed 00:08:42.414 Test: 
blob_esnap_io_4096_4096 ...passed 00:08:42.414 Test: blob_esnap_io_512_512 ...passed 00:08:42.414 Test: blob_esnap_io_4096_512 ...passed 00:08:42.414 Test: blob_esnap_io_512_4096 ...passed 00:08:42.414 Suite: blob_bs_nocopy_noextent 00:08:42.414 Test: blob_open ...passed 00:08:42.414 Test: blob_create ...[2024-04-27 04:51:10.012480] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:42.414 passed 00:08:42.414 Test: blob_create_loop ...passed 00:08:42.414 Test: blob_create_fail ...[2024-04-27 04:51:10.185719] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:42.414 passed 00:08:42.414 Test: blob_create_internal ...passed 00:08:42.414 Test: blob_create_zero_extent ...passed 00:08:42.414 Test: blob_snapshot ...passed 00:08:42.414 Test: blob_clone ...passed 00:08:42.414 Test: blob_inflate ...[2024-04-27 04:51:10.540196] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:42.414 passed 00:08:42.414 Test: blob_delete ...passed 00:08:42.414 Test: blob_resize_test ...[2024-04-27 04:51:10.683248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:42.414 passed 00:08:42.414 Test: channel_ops ...passed 00:08:42.414 Test: blob_super ...passed 00:08:42.414 Test: blob_rw_verify_iov ...passed 00:08:42.414 Test: blob_unmap ...passed 00:08:42.414 Test: blob_iter ...passed 00:08:42.414 Test: blob_parse_md ...passed 00:08:42.414 Test: bs_load_pending_removal ...passed 00:08:42.414 Test: bs_unload ...[2024-04-27 04:51:11.239469] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:42.414 passed 00:08:42.414 Test: bs_usable_clusters ...passed 00:08:42.414 Test: blob_crc ...[2024-04-27 04:51:11.366935] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:42.414 [2024-04-27 04:51:11.367147] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:42.414 passed 00:08:42.414 Test: blob_flags ...passed 00:08:42.414 Test: bs_version ...passed 00:08:42.414 Test: blob_set_xattrs_test ...[2024-04-27 04:51:11.569379] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:42.414 [2024-04-27 04:51:11.569542] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:42.414 passed 00:08:42.414 Test: blob_thin_prov_alloc ...passed 00:08:42.414 Test: blob_insert_cluster_msg_test ...passed 00:08:42.414 Test: blob_thin_prov_rw ...passed 00:08:42.414 Test: blob_thin_prov_rle ...passed 00:08:42.414 Test: blob_thin_prov_rw_iov ...passed 00:08:42.414 Test: blob_snapshot_rw ...passed 00:08:42.414 Test: blob_snapshot_rw_iov ...passed 00:08:42.674 Test: blob_inflate_rw ...passed 00:08:42.674 Test: blob_snapshot_freeze_io ...passed 00:08:42.933 Test: blob_operation_split_rw ...passed 00:08:43.190 Test: blob_operation_split_rw_iov ...passed 00:08:43.190 Test: blob_simultaneous_operations ...[2024-04-27 04:51:12.972946] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:43.190 [2024-04-27 04:51:12.973102] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:43.190 [2024-04-27 04:51:12.974543] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:43.190 [2024-04-27 04:51:12.974607] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:43.190 [2024-04-27 04:51:12.989990] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:43.190 [2024-04-27 04:51:12.990132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:43.190 [2024-04-27 04:51:12.990319] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:43.190 [2024-04-27 04:51:12.990372] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:43.190 passed 00:08:43.448 Test: blob_persist_test ...passed 00:08:43.448 Test: blob_decouple_snapshot ...passed 00:08:43.448 Test: blob_seek_io_unit ...passed 00:08:43.448 Test: blob_nested_freezes ...passed 00:08:43.448 Suite: blob_blob_nocopy_noextent 00:08:43.707 Test: blob_write ...passed 00:08:43.707 Test: blob_read ...passed 00:08:43.707 Test: blob_rw_verify ...passed 00:08:43.965 Test: blob_rw_verify_iov_nomem ...passed 00:08:43.966 Test: blob_rw_iov_read_only ...passed 00:08:43.966 Test: blob_xattr ...passed 00:08:43.966 Test: blob_dirty_shutdown ...passed 00:08:44.224 Test: blob_is_degraded ...passed 00:08:44.224 Suite: blob_esnap_bs_nocopy_noextent 00:08:44.224 Test: blob_esnap_create ...passed 00:08:44.224 Test: blob_esnap_thread_add_remove ...passed 00:08:44.224 Test: blob_esnap_clone_snapshot ...passed 00:08:44.483 Test: blob_esnap_clone_inflate ...passed 00:08:44.483 Test: blob_esnap_clone_decouple ...passed 00:08:44.483 Test: blob_esnap_clone_reload ...passed 00:08:44.483 Test: blob_esnap_hotplug ...passed 00:08:44.483 Suite: blob_nocopy_extent 00:08:44.483 Test: blob_init ...[2024-04-27 04:51:14.358360] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:44.767 passed 00:08:44.767 Test: blob_thin_provision ...passed 00:08:44.767 Test: blob_read_only ...passed 00:08:44.767 Test: bs_load ...[2024-04-27 04:51:14.451106] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:44.767 passed 00:08:44.767 Test: bs_load_custom_cluster_size ...passed 00:08:44.767 Test: bs_load_after_failed_grow ...passed 00:08:44.767 Test: bs_cluster_sz ...[2024-04-27 04:51:14.502969] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:44.767 [2024-04-27 04:51:14.503325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:08:44.767 [2024-04-27 04:51:14.503401] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:44.767 passed 00:08:44.767 Test: bs_resize_md ...passed 00:08:44.767 Test: bs_destroy ...passed 00:08:44.767 Test: bs_type ...passed 00:08:44.767 Test: bs_super_block ...passed 00:08:44.767 Test: bs_test_recover_cluster_count ...passed 00:08:44.767 Test: bs_grow_live ...passed 00:08:44.768 Test: bs_grow_live_no_space ...passed 00:08:44.768 Test: bs_test_grow ...passed 00:08:45.026 Test: blob_serialize_test ...passed 00:08:45.026 Test: super_block_crc ...passed 00:08:45.026 Test: blob_thin_prov_write_count_io ...passed 00:08:45.026 Test: bs_load_iter_test ...passed 00:08:45.026 Test: blob_relations ...[2024-04-27 04:51:14.769314] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:45.026 [2024-04-27 04:51:14.769477] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:45.026 [2024-04-27 04:51:14.770611] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:45.026 [2024-04-27 04:51:14.770738] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:45.026 passed 00:08:45.026 Test: blob_relations2 ...[2024-04-27 04:51:14.794256] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:45.026 [2024-04-27 04:51:14.794400] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:45.026 [2024-04-27 04:51:14.794458] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:45.026 [2024-04-27 04:51:14.794495] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:45.027 [2024-04-27 04:51:14.796168] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:45.027 [2024-04-27 04:51:14.796250] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:45.027 [2024-04-27 04:51:14.796773] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:45.027 [2024-04-27 04:51:14.796839] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:45.027 passed 00:08:45.027 Test: blob_relations3 ...passed 00:08:45.286 Test: blobstore_clean_power_failure ...passed 00:08:45.286 Test: blob_delete_snapshot_power_failure ...[2024-04-27 04:51:15.074585] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:45.286 [2024-04-27 04:51:15.097653] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:45.286 [2024-04-27 04:51:15.120764] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:45.286 [2024-04-27 04:51:15.120923] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:45.286 [2024-04-27 04:51:15.120982] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:45.286 [2024-04-27 04:51:15.143547] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:45.286 [2024-04-27 04:51:15.143685] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:45.286 [2024-04-27 04:51:15.143735] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:45.286 [2024-04-27 04:51:15.143773] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:45.286 [2024-04-27 04:51:15.166413] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:45.286 [2024-04-27 04:51:15.166553] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:45.286 [2024-04-27 04:51:15.166640] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:45.286 [2024-04-27 04:51:15.166702] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:45.544 [2024-04-27 04:51:15.188706] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:45.544 [2024-04-27 04:51:15.188906] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:45.544 [2024-04-27 04:51:15.209822] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:45.544 [2024-04-27 04:51:15.210060] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:45.544 [2024-04-27 04:51:15.231994] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:45.545 [2024-04-27 04:51:15.232149] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:45.545 passed 00:08:45.545 Test: blob_create_snapshot_power_failure ...[2024-04-27 04:51:15.295842] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:45.545 [2024-04-27 04:51:15.317104] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:45.545 [2024-04-27 04:51:15.358396] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:45.545 [2024-04-27 04:51:15.379737] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:45.803 passed 00:08:45.803 Test: blob_io_unit ...passed 00:08:45.803 Test: blob_io_unit_compatibility ...passed 00:08:45.803 Test: blob_ext_md_pages ...passed 00:08:45.803 Test: blob_esnap_io_4096_4096 ...passed 00:08:45.803 Test: blob_esnap_io_512_512 ...passed 00:08:45.803 Test: blob_esnap_io_4096_512 ...passed 00:08:45.803 Test: 
blob_esnap_io_512_4096 ...passed 00:08:45.803 Suite: blob_bs_nocopy_extent 00:08:46.062 Test: blob_open ...passed 00:08:46.062 Test: blob_create ...[2024-04-27 04:51:15.775286] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:46.062 passed 00:08:46.062 Test: blob_create_loop ...passed 00:08:46.062 Test: blob_create_fail ...[2024-04-27 04:51:15.939816] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:46.321 passed 00:08:46.321 Test: blob_create_internal ...passed 00:08:46.321 Test: blob_create_zero_extent ...passed 00:08:46.321 Test: blob_snapshot ...passed 00:08:46.580 Test: blob_clone ...passed 00:08:46.580 Test: blob_inflate ...[2024-04-27 04:51:16.289114] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:46.580 passed 00:08:46.580 Test: blob_delete ...passed 00:08:46.580 Test: blob_resize_test ...[2024-04-27 04:51:16.418890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:46.580 passed 00:08:46.841 Test: channel_ops ...passed 00:08:46.841 Test: blob_super ...passed 00:08:46.841 Test: blob_rw_verify_iov ...passed 00:08:46.841 Test: blob_unmap ...passed 00:08:47.100 Test: blob_iter ...passed 00:08:47.100 Test: blob_parse_md ...passed 00:08:47.100 Test: bs_load_pending_removal ...passed 00:08:47.100 Test: bs_unload ...[2024-04-27 04:51:16.926593] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:47.100 passed 00:08:47.359 Test: bs_usable_clusters ...passed 00:08:47.359 Test: blob_crc ...[2024-04-27 04:51:17.052812] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:47.359 [2024-04-27 04:51:17.053009] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:47.359 passed 00:08:47.359 Test: blob_flags ...passed 00:08:47.359 Test: bs_version ...passed 00:08:47.359 Test: blob_set_xattrs_test ...[2024-04-27 04:51:17.237210] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:47.359 [2024-04-27 04:51:17.237399] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:47.618 passed 00:08:47.618 Test: blob_thin_prov_alloc ...passed 00:08:47.618 Test: blob_insert_cluster_msg_test ...passed 00:08:47.877 Test: blob_thin_prov_rw ...passed 00:08:47.877 Test: blob_thin_prov_rle ...passed 00:08:47.877 Test: blob_thin_prov_rw_iov ...passed 00:08:47.877 Test: blob_snapshot_rw ...passed 00:08:48.136 Test: blob_snapshot_rw_iov ...passed 00:08:48.395 Test: blob_inflate_rw ...passed 00:08:48.395 Test: blob_snapshot_freeze_io ...passed 00:08:48.654 Test: blob_operation_split_rw ...passed 00:08:48.913 Test: blob_operation_split_rw_iov ...passed 00:08:48.913 Test: blob_simultaneous_operations ...[2024-04-27 04:51:18.606134] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:48.913 [2024-04-27 
04:51:18.606272] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:48.913 [2024-04-27 04:51:18.607634] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:48.913 [2024-04-27 04:51:18.607691] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:48.913 [2024-04-27 04:51:18.621278] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:48.913 [2024-04-27 04:51:18.621412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:48.913 [2024-04-27 04:51:18.621570] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:48.914 [2024-04-27 04:51:18.621600] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:48.914 passed 00:08:48.914 Test: blob_persist_test ...passed 00:08:49.173 Test: blob_decouple_snapshot ...passed 00:08:49.173 Test: blob_seek_io_unit ...passed 00:08:49.173 Test: blob_nested_freezes ...passed 00:08:49.173 Suite: blob_blob_nocopy_extent 00:08:49.173 Test: blob_write ...passed 00:08:49.173 Test: blob_read ...passed 00:08:49.432 Test: blob_rw_verify ...passed 00:08:49.432 Test: blob_rw_verify_iov_nomem ...passed 00:08:49.432 Test: blob_rw_iov_read_only ...passed 00:08:49.432 Test: blob_xattr ...passed 00:08:49.691 Test: blob_dirty_shutdown ...passed 00:08:49.691 Test: blob_is_degraded ...passed 00:08:49.691 Suite: blob_esnap_bs_nocopy_extent 00:08:49.691 Test: blob_esnap_create ...passed 00:08:49.691 Test: blob_esnap_thread_add_remove ...passed 00:08:49.950 Test: blob_esnap_clone_snapshot ...passed 00:08:49.950 Test: blob_esnap_clone_inflate ...passed 00:08:49.950 Test: blob_esnap_clone_decouple ...passed 00:08:49.950 Test: blob_esnap_clone_reload ...passed 00:08:49.950 Test: blob_esnap_hotplug ...passed 00:08:49.950 Suite: blob_copy_noextent 00:08:49.950 Test: blob_init ...[2024-04-27 04:51:19.821534] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:49.950 passed 00:08:50.209 Test: blob_thin_provision ...passed 00:08:50.209 Test: blob_read_only ...passed 00:08:50.209 Test: bs_load ...[2024-04-27 04:51:19.900156] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:50.209 passed 00:08:50.209 Test: bs_load_custom_cluster_size ...passed 00:08:50.209 Test: bs_load_after_failed_grow ...passed 00:08:50.209 Test: bs_cluster_sz ...[2024-04-27 04:51:19.940643] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:50.209 [2024-04-27 04:51:19.940890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:08:50.209 [2024-04-27 04:51:19.940954] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:50.209 passed 00:08:50.209 Test: bs_resize_md ...passed 00:08:50.209 Test: bs_destroy ...passed 00:08:50.209 Test: bs_type ...passed 00:08:50.209 Test: bs_super_block ...passed 00:08:50.209 Test: bs_test_recover_cluster_count ...passed 00:08:50.209 Test: bs_grow_live ...passed 00:08:50.209 Test: bs_grow_live_no_space ...passed 00:08:50.209 Test: bs_test_grow ...passed 00:08:50.209 Test: blob_serialize_test ...passed 00:08:50.469 Test: super_block_crc ...passed 00:08:50.469 Test: blob_thin_prov_write_count_io ...passed 00:08:50.469 Test: bs_load_iter_test ...passed 00:08:50.469 Test: blob_relations ...[2024-04-27 04:51:20.191656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:50.469 [2024-04-27 04:51:20.191812] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:50.469 [2024-04-27 04:51:20.192490] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:50.469 [2024-04-27 04:51:20.192540] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:50.469 passed 00:08:50.469 Test: blob_relations2 ...[2024-04-27 04:51:20.215358] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:50.469 [2024-04-27 04:51:20.215499] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:50.469 [2024-04-27 04:51:20.215535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:50.469 [2024-04-27 04:51:20.215556] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:50.469 [2024-04-27 04:51:20.216628] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:50.469 [2024-04-27 04:51:20.216701] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:50.469 [2024-04-27 04:51:20.217028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:50.469 [2024-04-27 04:51:20.217086] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:50.469 passed 00:08:50.469 Test: blob_relations3 ...passed 00:08:50.728 Test: blobstore_clean_power_failure ...passed 00:08:50.728 Test: blob_delete_snapshot_power_failure ...[2024-04-27 04:51:20.508897] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:50.728 [2024-04-27 04:51:20.529555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:50.728 [2024-04-27 04:51:20.529703] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:50.728 [2024-04-27 04:51:20.529767] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:50.728 [2024-04-27 04:51:20.549955] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:50.728 [2024-04-27 04:51:20.550082] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:50.728 [2024-04-27 04:51:20.550129] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:50.728 [2024-04-27 04:51:20.550157] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:50.728 [2024-04-27 04:51:20.570561] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:50.728 [2024-04-27 04:51:20.570750] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:50.728 [2024-04-27 04:51:20.591584] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:50.728 [2024-04-27 04:51:20.591780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:50.728 [2024-04-27 04:51:20.612940] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:50.728 [2024-04-27 04:51:20.613112] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:50.987 passed 00:08:50.987 Test: blob_create_snapshot_power_failure ...[2024-04-27 04:51:20.674069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:50.987 [2024-04-27 04:51:20.728392] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:50.987 [2024-04-27 04:51:20.754900] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:50.987 passed 00:08:50.987 Test: blob_io_unit ...passed 00:08:50.987 Test: blob_io_unit_compatibility ...passed 00:08:51.246 Test: blob_ext_md_pages ...passed 00:08:51.246 Test: blob_esnap_io_4096_4096 ...passed 00:08:51.246 Test: blob_esnap_io_512_512 ...passed 00:08:51.246 Test: blob_esnap_io_4096_512 ...passed 00:08:51.246 Test: blob_esnap_io_512_4096 ...passed 00:08:51.246 Suite: blob_bs_copy_noextent 00:08:51.505 Test: blob_open ...passed 00:08:51.505 Test: blob_create ...[2024-04-27 04:51:21.235172] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:51.505 passed 00:08:51.505 Test: blob_create_loop ...passed 00:08:51.505 Test: blob_create_fail ...[2024-04-27 04:51:21.393179] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:51.764 passed 00:08:51.764 Test: blob_create_internal ...passed 00:08:51.764 Test: blob_create_zero_extent ...passed 00:08:51.764 Test: blob_snapshot ...passed 00:08:52.022 Test: blob_clone ...passed 00:08:52.022 Test: blob_inflate ...[2024-04-27 04:51:21.752135] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:52.022 passed 00:08:52.022 Test: blob_delete ...passed 00:08:52.022 Test: blob_resize_test ...[2024-04-27 04:51:21.894260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:52.022 passed 00:08:52.288 Test: channel_ops ...passed 00:08:52.288 Test: blob_super ...passed 00:08:52.288 Test: blob_rw_verify_iov ...passed 00:08:52.288 Test: blob_unmap ...passed 00:08:52.551 Test: blob_iter ...passed 00:08:52.551 Test: blob_parse_md ...passed 00:08:52.551 Test: bs_load_pending_removal ...passed 00:08:52.551 Test: bs_unload ...[2024-04-27 04:51:22.409612] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:52.551 passed 00:08:52.809 Test: bs_usable_clusters ...passed 00:08:52.809 Test: blob_crc ...[2024-04-27 04:51:22.549730] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:52.809 [2024-04-27 04:51:22.549913] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:52.809 passed 00:08:52.809 Test: blob_flags ...passed 00:08:53.066 Test: bs_version ...passed 00:08:53.066 Test: blob_set_xattrs_test ...[2024-04-27 04:51:22.759345] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:53.066 [2024-04-27 04:51:22.759503] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:53.066 passed 00:08:53.324 Test: blob_thin_prov_alloc ...passed 00:08:53.324 Test: blob_insert_cluster_msg_test ...passed 00:08:53.324 Test: blob_thin_prov_rw ...passed 00:08:53.324 Test: blob_thin_prov_rle ...passed 00:08:53.583 Test: blob_thin_prov_rw_iov ...passed 00:08:53.583 Test: blob_snapshot_rw ...passed 00:08:53.583 Test: blob_snapshot_rw_iov ...passed 00:08:53.842 Test: blob_inflate_rw ...passed 00:08:53.842 Test: blob_snapshot_freeze_io ...passed 00:08:54.101 Test: blob_operation_split_rw ...passed 00:08:54.360 Test: blob_operation_split_rw_iov ...passed 00:08:54.360 Test: blob_simultaneous_operations ...[2024-04-27 04:51:24.141028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:54.360 [2024-04-27 04:51:24.141180] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.360 [2024-04-27 04:51:24.142012] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:54.360 [2024-04-27 04:51:24.142059] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.360 [2024-04-27 04:51:24.146553] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:54.360 [2024-04-27 04:51:24.146646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.360 [2024-04-27 04:51:24.146775] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:08:54.360 [2024-04-27 04:51:24.146801] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.360 passed 00:08:54.619 Test: blob_persist_test ...passed 00:08:54.619 Test: blob_decouple_snapshot ...passed 00:08:54.619 Test: blob_seek_io_unit ...passed 00:08:54.878 Test: blob_nested_freezes ...passed 00:08:54.878 Suite: blob_blob_copy_noextent 00:08:54.878 Test: blob_write ...passed 00:08:54.878 Test: blob_read ...passed 00:08:54.878 Test: blob_rw_verify ...passed 00:08:55.137 Test: blob_rw_verify_iov_nomem ...passed 00:08:55.137 Test: blob_rw_iov_read_only ...passed 00:08:55.137 Test: blob_xattr ...passed 00:08:55.137 Test: blob_dirty_shutdown ...passed 00:08:55.395 Test: blob_is_degraded ...passed 00:08:55.395 Suite: blob_esnap_bs_copy_noextent 00:08:55.395 Test: blob_esnap_create ...passed 00:08:55.395 Test: blob_esnap_thread_add_remove ...passed 00:08:55.395 Test: blob_esnap_clone_snapshot ...passed 00:08:55.654 Test: blob_esnap_clone_inflate ...passed 00:08:55.654 Test: blob_esnap_clone_decouple ...passed 00:08:55.654 Test: blob_esnap_clone_reload ...passed 00:08:55.654 Test: blob_esnap_hotplug ...passed 00:08:55.654 Suite: blob_copy_extent 00:08:55.654 Test: blob_init ...[2024-04-27 04:51:25.470761] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:55.654 passed 00:08:55.654 Test: blob_thin_provision ...passed 00:08:55.654 Test: blob_read_only ...passed 00:08:55.654 Test: bs_load ...[2024-04-27 04:51:25.547735] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:55.654 passed 00:08:55.913 Test: bs_load_custom_cluster_size ...passed 00:08:55.913 Test: bs_load_after_failed_grow ...passed 00:08:55.913 Test: bs_cluster_sz ...[2024-04-27 04:51:25.588438] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:55.913 [2024-04-27 04:51:25.588715] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:08:55.913 [2024-04-27 04:51:25.588766] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:55.913 passed 00:08:55.913 Test: bs_resize_md ...passed 00:08:55.913 Test: bs_destroy ...passed 00:08:55.913 Test: bs_type ...passed 00:08:55.913 Test: bs_super_block ...passed 00:08:55.913 Test: bs_test_recover_cluster_count ...passed 00:08:55.913 Test: bs_grow_live ...passed 00:08:55.913 Test: bs_grow_live_no_space ...passed 00:08:55.913 Test: bs_test_grow ...passed 00:08:55.913 Test: blob_serialize_test ...passed 00:08:55.913 Test: super_block_crc ...passed 00:08:55.913 Test: blob_thin_prov_write_count_io ...passed 00:08:56.172 Test: bs_load_iter_test ...passed 00:08:56.172 Test: blob_relations ...[2024-04-27 04:51:25.838619] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:56.172 [2024-04-27 04:51:25.838768] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:56.172 [2024-04-27 04:51:25.839803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:56.172 [2024-04-27 04:51:25.839871] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:56.172 passed 00:08:56.172 Test: blob_relations2 ...[2024-04-27 04:51:25.866982] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:56.172 [2024-04-27 04:51:25.867142] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:56.172 [2024-04-27 04:51:25.867199] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:56.172 [2024-04-27 04:51:25.867232] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:56.172 [2024-04-27 04:51:25.868774] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:56.172 [2024-04-27 04:51:25.868842] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:56.172 [2024-04-27 04:51:25.869303] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:56.172 [2024-04-27 04:51:25.869368] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:56.172 passed 00:08:56.172 Test: blob_relations3 ...passed 00:08:56.431 Test: blobstore_clean_power_failure ...passed 00:08:56.431 Test: blob_delete_snapshot_power_failure ...[2024-04-27 04:51:26.150071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:56.431 [2024-04-27 04:51:26.169933] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:56.431 [2024-04-27 04:51:26.189879] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:56.431 [2024-04-27 04:51:26.190064] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:56.431 [2024-04-27 04:51:26.190106] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:56.431 [2024-04-27 04:51:26.213605] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:56.431 [2024-04-27 04:51:26.213741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:56.431 [2024-04-27 04:51:26.213773] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:56.431 [2024-04-27 04:51:26.213803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:56.431 [2024-04-27 04:51:26.233189] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:56.431 [2024-04-27 04:51:26.233319] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:56.431 [2024-04-27 04:51:26.233349] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:56.431 [2024-04-27 04:51:26.233378] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:56.431 [2024-04-27 04:51:26.253037] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:56.431 [2024-04-27 04:51:26.253196] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:56.431 [2024-04-27 04:51:26.272680] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:56.431 [2024-04-27 04:51:26.272833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:56.431 [2024-04-27 04:51:26.292674] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:56.431 [2024-04-27 04:51:26.292783] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:56.690 passed 00:08:56.690 Test: blob_create_snapshot_power_failure ...[2024-04-27 04:51:26.351001] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:56.690 [2024-04-27 04:51:26.370087] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:56.690 [2024-04-27 04:51:26.408425] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:56.690 [2024-04-27 04:51:26.427948] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:56.690 passed 00:08:56.690 Test: blob_io_unit ...passed 00:08:56.690 Test: blob_io_unit_compatibility ...passed 00:08:56.690 Test: blob_ext_md_pages ...passed 00:08:56.949 Test: blob_esnap_io_4096_4096 ...passed 00:08:56.949 Test: blob_esnap_io_512_512 ...passed 00:08:56.949 Test: blob_esnap_io_4096_512 ...passed 00:08:56.949 Test: 
blob_esnap_io_512_4096 ...passed 00:08:56.949 Suite: blob_bs_copy_extent 00:08:56.949 Test: blob_open ...passed 00:08:56.949 Test: blob_create ...[2024-04-27 04:51:26.808421] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:56.949 passed 00:08:57.208 Test: blob_create_loop ...passed 00:08:57.208 Test: blob_create_fail ...[2024-04-27 04:51:26.953293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:57.208 passed 00:08:57.208 Test: blob_create_internal ...passed 00:08:57.208 Test: blob_create_zero_extent ...passed 00:08:57.466 Test: blob_snapshot ...passed 00:08:57.466 Test: blob_clone ...passed 00:08:57.466 Test: blob_inflate ...[2024-04-27 04:51:27.237376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:57.466 passed 00:08:57.466 Test: blob_delete ...passed 00:08:57.466 Test: blob_resize_test ...[2024-04-27 04:51:27.346793] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:57.725 passed 00:08:57.725 Test: channel_ops ...passed 00:08:57.725 Test: blob_super ...passed 00:08:57.725 Test: blob_rw_verify_iov ...passed 00:08:57.725 Test: blob_unmap ...passed 00:08:57.983 Test: blob_iter ...passed 00:08:57.983 Test: blob_parse_md ...passed 00:08:57.983 Test: bs_load_pending_removal ...passed 00:08:57.983 Test: bs_unload ...[2024-04-27 04:51:27.789268] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:57.983 passed 00:08:57.983 Test: bs_usable_clusters ...passed 00:08:58.241 Test: blob_crc ...[2024-04-27 04:51:27.907109] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:58.241 [2024-04-27 04:51:27.907284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:58.241 passed 00:08:58.241 Test: blob_flags ...passed 00:08:58.241 Test: bs_version ...passed 00:08:58.241 Test: blob_set_xattrs_test ...[2024-04-27 04:51:28.080818] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:58.241 [2024-04-27 04:51:28.080950] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:58.241 passed 00:08:58.500 Test: blob_thin_prov_alloc ...passed 00:08:58.500 Test: blob_insert_cluster_msg_test ...passed 00:08:58.500 Test: blob_thin_prov_rw ...passed 00:08:58.757 Test: blob_thin_prov_rle ...passed 00:08:58.757 Test: blob_thin_prov_rw_iov ...passed 00:08:58.757 Test: blob_snapshot_rw ...passed 00:08:59.014 Test: blob_snapshot_rw_iov ...passed 00:08:59.272 Test: blob_inflate_rw ...passed 00:08:59.272 Test: blob_snapshot_freeze_io ...passed 00:08:59.530 Test: blob_operation_split_rw ...passed 00:08:59.788 Test: blob_operation_split_rw_iov ...passed 00:08:59.788 Test: blob_simultaneous_operations ...[2024-04-27 04:51:29.607937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:59.788 [2024-04-27 
04:51:29.608110] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:59.788 [2024-04-27 04:51:29.609299] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:59.788 [2024-04-27 04:51:29.609355] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:59.788 [2024-04-27 04:51:29.615367] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:59.788 [2024-04-27 04:51:29.615471] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:59.788 [2024-04-27 04:51:29.615646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:59.788 [2024-04-27 04:51:29.615681] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:59.788 passed 00:09:00.046 Test: blob_persist_test ...passed 00:09:00.046 Test: blob_decouple_snapshot ...passed 00:09:00.315 Test: blob_seek_io_unit ...passed 00:09:00.315 Test: blob_nested_freezes ...passed 00:09:00.315 Suite: blob_blob_copy_extent 00:09:00.315 Test: blob_write ...passed 00:09:00.588 Test: blob_read ...passed 00:09:00.588 Test: blob_rw_verify ...passed 00:09:00.588 Test: blob_rw_verify_iov_nomem ...passed 00:09:00.588 Test: blob_rw_iov_read_only ...passed 00:09:00.588 Test: blob_xattr ...passed 00:09:00.847 Test: blob_dirty_shutdown ...passed 00:09:00.847 Test: blob_is_degraded ...passed 00:09:00.847 Suite: blob_esnap_bs_copy_extent 00:09:00.847 Test: blob_esnap_create ...passed 00:09:00.847 Test: blob_esnap_thread_add_remove ...passed 00:09:01.106 Test: blob_esnap_clone_snapshot ...passed 00:09:01.106 Test: blob_esnap_clone_inflate ...passed 00:09:01.106 Test: blob_esnap_clone_decouple ...passed 00:09:01.106 Test: blob_esnap_clone_reload ...passed 00:09:01.364 Test: blob_esnap_hotplug ...passed 00:09:01.364 00:09:01.364 Run Summary: Type Total Ran Passed Failed Inactive 00:09:01.364 suites 16 16 n/a 0 0 00:09:01.364 tests 348 348 348 0 0 00:09:01.364 asserts 92605 92605 92605 0 n/a 00:09:01.364 00:09:01.364 Elapsed time = 22.459 seconds 00:09:01.364 04:51:31 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:09:01.364 00:09:01.364 00:09:01.364 CUnit - A unit testing framework for C - Version 2.1-3 00:09:01.364 http://cunit.sourceforge.net/ 00:09:01.364 00:09:01.364 00:09:01.364 Suite: blob_bdev 00:09:01.364 Test: create_bs_dev ...passed 00:09:01.364 Test: create_bs_dev_ro ...[2024-04-27 04:51:31.162861] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:09:01.364 passed 00:09:01.364 Test: create_bs_dev_rw ...passed 00:09:01.364 Test: claim_bs_dev ...[2024-04-27 04:51:31.163632] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:09:01.364 passed 00:09:01.364 Test: claim_bs_dev_ro ...passed 00:09:01.364 Test: deferred_destroy_refs ...passed 00:09:01.364 Test: deferred_destroy_channels ...passed 00:09:01.364 Test: deferred_destroy_threads ...passed 00:09:01.364 00:09:01.364 Run Summary: Type Total Ran Passed Failed Inactive 00:09:01.364 suites 1 1 n/a 0 0 00:09:01.364 tests 8 8 8 0 0 00:09:01.364 
asserts 119 119 119 0 n/a 00:09:01.364 00:09:01.364 Elapsed time = 0.002 seconds 00:09:01.364 04:51:31 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:09:01.364 00:09:01.364 00:09:01.364 CUnit - A unit testing framework for C - Version 2.1-3 00:09:01.364 http://cunit.sourceforge.net/ 00:09:01.364 00:09:01.364 00:09:01.364 Suite: tree 00:09:01.364 Test: blobfs_tree_op_test ...passed 00:09:01.364 00:09:01.364 Run Summary: Type Total Ran Passed Failed Inactive 00:09:01.364 suites 1 1 n/a 0 0 00:09:01.364 tests 1 1 1 0 0 00:09:01.364 asserts 27 27 27 0 n/a 00:09:01.364 00:09:01.364 Elapsed time = 0.000 seconds 00:09:01.364 04:51:31 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:09:01.364 00:09:01.364 00:09:01.364 CUnit - A unit testing framework for C - Version 2.1-3 00:09:01.364 http://cunit.sourceforge.net/ 00:09:01.364 00:09:01.364 00:09:01.364 Suite: blobfs_async_ut 00:09:01.623 Test: fs_init ...passed 00:09:01.623 Test: fs_open ...passed 00:09:01.623 Test: fs_create ...passed 00:09:01.623 Test: fs_truncate ...passed 00:09:01.623 Test: fs_rename ...[2024-04-27 04:51:31.423879] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:09:01.623 passed 00:09:01.623 Test: fs_rw_async ...passed 00:09:01.623 Test: fs_writev_readv_async ...passed 00:09:01.623 Test: tree_find_buffer_ut ...passed 00:09:01.623 Test: channel_ops ...passed 00:09:01.881 Test: channel_ops_sync ...passed 00:09:01.881 00:09:01.881 Run Summary: Type Total Ran Passed Failed Inactive 00:09:01.881 suites 1 1 n/a 0 0 00:09:01.881 tests 10 10 10 0 0 00:09:01.881 asserts 292 292 292 0 n/a 00:09:01.881 00:09:01.881 Elapsed time = 0.307 seconds 00:09:01.881 04:51:31 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:09:01.881 00:09:01.881 00:09:01.881 CUnit - A unit testing framework for C - Version 2.1-3 00:09:01.881 http://cunit.sourceforge.net/ 00:09:01.881 00:09:01.881 00:09:01.881 Suite: blobfs_sync_ut 00:09:01.881 Test: cache_read_after_write ...[2024-04-27 04:51:31.697335] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:09:01.881 passed 00:09:01.881 Test: file_length ...passed 00:09:02.139 Test: append_write_to_extend_blob ...passed 00:09:02.139 Test: partial_buffer ...passed 00:09:02.140 Test: cache_write_null_buffer ...passed 00:09:02.140 Test: fs_create_sync ...passed 00:09:02.140 Test: fs_rename_sync ...passed 00:09:02.140 Test: cache_append_no_cache ...passed 00:09:02.140 Test: fs_delete_file_without_close ...passed 00:09:02.140 00:09:02.140 Run Summary: Type Total Ran Passed Failed Inactive 00:09:02.140 suites 1 1 n/a 0 0 00:09:02.140 tests 9 9 9 0 0 00:09:02.140 asserts 345 345 345 0 n/a 00:09:02.140 00:09:02.140 Elapsed time = 0.808 seconds 00:09:02.398 04:51:32 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:09:02.398 00:09:02.398 00:09:02.398 CUnit - A unit testing framework for C - Version 2.1-3 00:09:02.398 http://cunit.sourceforge.net/ 00:09:02.398 00:09:02.398 00:09:02.398 Suite: blobfs_bdev_ut 00:09:02.398 Test: spdk_blobfs_bdev_detect_test ...[2024-04-27 04:51:32.101189] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
00:09:02.398 passed 00:09:02.398 Test: spdk_blobfs_bdev_create_test ...[2024-04-27 04:51:32.101740] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:09:02.398 passed 00:09:02.398 Test: spdk_blobfs_bdev_mount_test ...passed 00:09:02.398 00:09:02.398 Run Summary: Type Total Ran Passed Failed Inactive 00:09:02.398 suites 1 1 n/a 0 0 00:09:02.398 tests 3 3 3 0 0 00:09:02.398 asserts 9 9 9 0 n/a 00:09:02.398 00:09:02.398 Elapsed time = 0.001 seconds 00:09:02.398 00:09:02.398 real 0m23.553s 00:09:02.398 user 0m23.121s 00:09:02.398 sys 0m0.852s 00:09:02.399 04:51:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.399 04:51:32 -- common/autotest_common.sh@10 -- # set +x 00:09:02.399 ************************************ 00:09:02.399 END TEST unittest_blob_blobfs 00:09:02.399 ************************************ 00:09:02.399 04:51:32 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:09:02.399 04:51:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:02.399 04:51:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:02.399 04:51:32 -- common/autotest_common.sh@10 -- # set +x 00:09:02.399 ************************************ 00:09:02.399 START TEST unittest_event 00:09:02.399 ************************************ 00:09:02.399 04:51:32 -- common/autotest_common.sh@1104 -- # unittest_event 00:09:02.399 04:51:32 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:09:02.399 00:09:02.399 00:09:02.399 CUnit - A unit testing framework for C - Version 2.1-3 00:09:02.399 http://cunit.sourceforge.net/ 00:09:02.399 00:09:02.399 00:09:02.399 Suite: app_suite 00:09:02.399 Test: test_spdk_app_parse_args ...app_ut [options] 00:09:02.399 options: 00:09:02.399 -c, --config JSON config file (default none) 00:09:02.399 --json JSON config file (default none) 00:09:02.399 --json-ignore-init-errors 00:09:02.399 don't exit on invalid config entry 00:09:02.399 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:02.399 -g, --single-file-segments 00:09:02.399 force creating just one hugetlbfs file 00:09:02.399 -h, --help show this usage 00:09:02.399 -i, --shm-id shared memory ID (optional) 00:09:02.399 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:09:02.399 --lcores lcore to CPU mapping list. The list is in the format: 00:09:02.399 [<,lcores[@CPUs]>...] 00:09:02.399 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:02.399 Within the group, '-' is used for range separator, 00:09:02.399 ',' is used for single number separator. 00:09:02.399 '( )' can be omitted for single element group, 00:09:02.399 '@' can be omitted if cpus and lcores have the same value 00:09:02.399 -n, --mem-channels channel number of memory channels used for DPDK 00:09:02.399 -p, --main-core main (primary) core for DPDK 00:09:02.399 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:02.399 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:02.399 --disable-cpumask-locks Disable CPU core lock files. 
00:09:02.399 --silence-noticelog disable notice level logging to stderr 00:09:02.399 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:02.399 -u, --no-pci disable PCI access 00:09:02.399 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:02.399 --max-delay maximum reactor delay (in microseconds) 00:09:02.399 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:02.399 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:02.399 -R, --huge-unlink unlink huge files after initialization 00:09:02.399 app_ut: invalid option -- 'z' 00:09:02.399 -v, --version print SPDK version 00:09:02.399 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:02.399 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:02.399 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:02.399 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:09:02.399 Tracepoints vary in size and can use more than one trace entry. 00:09:02.399 --rpcs-allowed comma-separated list of permitted RPCS 00:09:02.399 --env-context Opaque context for use of the env implementation 00:09:02.399 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:02.399 --no-huge run without using hugepages 00:09:02.399 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:09:02.399 -e, --tpoint-group [:] 00:09:02.399 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:09:02.399 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:09:02.399 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:09:02.399 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:09:02.399 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:09:02.399 app_ut [options] 00:09:02.399 options: 00:09:02.399 -c, --config JSON config file (default none) 00:09:02.399 --json JSON config file (default none) 00:09:02.399 --json-ignore-init-errors 00:09:02.399 don't exit on invalid config entry 00:09:02.399 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:02.399 -g, --single-file-segments 00:09:02.399 force creating just one hugetlbfs file 00:09:02.399 -h, --help show this usage 00:09:02.399 -i, --shm-id shared memory ID (optional) 00:09:02.399 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:09:02.399 --lcores lcore to CPU mapping list. The list is in the format: 00:09:02.399 [<,lcores[@CPUs]>...] 00:09:02.399 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:02.399 Within the group, '-' is used for range separator, 00:09:02.399 ',' is used for single number separator. 
00:09:02.399 '( )' can be omitted for single element group, 00:09:02.399 '@' can be omitted if cpus and lcores have the same value 00:09:02.399 -n, --mem-channels channel number of memory channels used for DPDK 00:09:02.399 app_ut: unrecognized option '--test-long-opt' 00:09:02.399 -p, --main-core main (primary) core for DPDK 00:09:02.399 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:02.399 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:02.399 --disable-cpumask-locks Disable CPU core lock files. 00:09:02.399 --silence-noticelog disable notice level logging to stderr 00:09:02.399 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:02.399 -u, --no-pci disable PCI access 00:09:02.399 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:02.399 --max-delay maximum reactor delay (in microseconds) 00:09:02.399 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:02.399 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:02.399 -R, --huge-unlink unlink huge files after initialization 00:09:02.399 -v, --version print SPDK version 00:09:02.399 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:02.399 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:02.399 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:02.399 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:09:02.399 Tracepoints vary in size and can use more than one trace entry. 00:09:02.399 --rpcs-allowed comma-separated list of permitted RPCS 00:09:02.399 --env-context Opaque context for use of the env implementation 00:09:02.399 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:02.399 --no-huge run without using hugepages 00:09:02.399 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:09:02.400 -e, --tpoint-group [:] 00:09:02.400 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:09:02.400 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:09:02.400 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:09:02.400 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:09:02.400 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:09:02.400 [2024-04-27 04:51:32.183496] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
00:09:02.400 [2024-04-27 04:51:32.183976] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:09:02.400 app_ut [options] 00:09:02.400 options: 00:09:02.400 -c, --config JSON config file (default none) 00:09:02.400 --json JSON config file (default none) 00:09:02.400 --json-ignore-init-errors 00:09:02.400 don't exit on invalid config entry 00:09:02.400 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:02.400 -g, --single-file-segments 00:09:02.400 force creating just one hugetlbfs file 00:09:02.400 -h, --help show this usage 00:09:02.400 -i, --shm-id shared memory ID (optional) 00:09:02.400 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:09:02.400 --lcores lcore to CPU mapping list. The list is in the format: 00:09:02.400 [<,lcores[@CPUs]>...] 00:09:02.400 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:02.400 Within the group, '-' is used for range separator, 00:09:02.400 ',' is used for single number separator. 00:09:02.400 '( )' can be omitted for single element group, 00:09:02.400 '@' can be omitted if cpus and lcores have the same value 00:09:02.400 -n, --mem-channels channel number of memory channels used for DPDK 00:09:02.400 -p, --main-core main (primary) core for DPDK 00:09:02.400 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:02.400 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:02.400 --disable-cpumask-locks Disable CPU core lock files. 00:09:02.400 --silence-noticelog disable notice level logging to stderr 00:09:02.400 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:02.400 -u, --no-pci disable PCI access 00:09:02.400 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:02.400 --max-delay maximum reactor delay (in microseconds) 00:09:02.400 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:02.400 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:02.400 -R, --huge-unlink unlink huge files after initialization 00:09:02.400 -v, --version print SPDK version 00:09:02.400 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:02.400 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:02.400 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:02.400 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:09:02.400 Tracepoints vary in size and can use more than one trace entry. 00:09:02.400 --rpcs-allowed comma-separated list of permitted RPCS 00:09:02.400 --env-context Opaque context for use of the env implementation 00:09:02.400 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:02.400 --no-huge run without using hugepages 00:09:02.400 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:09:02.400 -e, --tpoint-group [:] 00:09:02.400 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:09:02.400 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:09:02.400 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:09:02.400 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:09:02.400 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:09:02.400 [2024-04-27 04:51:32.184440] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:09:02.400 passed 00:09:02.400 00:09:02.400 Run Summary: Type Total Ran Passed Failed Inactive 00:09:02.400 suites 1 1 n/a 0 0 00:09:02.400 tests 1 1 1 0 0 00:09:02.400 asserts 8 8 8 0 n/a 00:09:02.400 00:09:02.400 Elapsed time = 0.003 seconds 00:09:02.400 04:51:32 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:09:02.400 00:09:02.400 00:09:02.400 CUnit - A unit testing framework for C - Version 2.1-3 00:09:02.400 http://cunit.sourceforge.net/ 00:09:02.400 00:09:02.400 00:09:02.400 Suite: app_suite 00:09:02.400 Test: test_create_reactor ...passed 00:09:02.400 Test: test_init_reactors ...passed 00:09:02.400 Test: test_event_call ...passed 00:09:02.400 Test: test_schedule_thread ...passed 00:09:02.400 Test: test_reschedule_thread ...passed 00:09:02.400 Test: test_bind_thread ...passed 00:09:02.400 Test: test_for_each_reactor ...passed 00:09:02.400 Test: test_reactor_stats ...passed 00:09:02.400 Test: test_scheduler ...passed 00:09:02.400 Test: test_governor ...passed 00:09:02.400 00:09:02.400 Run Summary: Type Total Ran Passed Failed Inactive 00:09:02.400 suites 1 1 n/a 0 0 00:09:02.400 tests 10 10 10 0 0 00:09:02.400 asserts 344 344 344 0 n/a 00:09:02.400 00:09:02.400 Elapsed time = 0.017 seconds 00:09:02.400 00:09:02.400 real 0m0.094s 00:09:02.400 user 0m0.060s 00:09:02.400 sys 0m0.035s 00:09:02.400 04:51:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.400 04:51:32 -- common/autotest_common.sh@10 -- # set +x 00:09:02.400 ************************************ 00:09:02.400 END TEST unittest_event 00:09:02.400 ************************************ 00:09:02.659 04:51:32 -- unit/unittest.sh@233 -- # uname -s 00:09:02.659 04:51:32 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:09:02.659 04:51:32 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:09:02.659 04:51:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:02.659 04:51:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:02.659 04:51:32 -- common/autotest_common.sh@10 -- # set +x 00:09:02.659 ************************************ 00:09:02.659 START TEST unittest_ftl 00:09:02.659 ************************************ 00:09:02.659 04:51:32 -- common/autotest_common.sh@1104 -- # unittest_ftl 00:09:02.659 04:51:32 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:09:02.659 00:09:02.659 00:09:02.659 CUnit - A unit testing framework for C - Version 2.1-3 00:09:02.659 http://cunit.sourceforge.net/ 00:09:02.659 00:09:02.659 00:09:02.659 Suite: ftl_band_suite 00:09:02.659 Test: test_band_block_offset_from_addr_base ...passed 00:09:02.659 Test: test_band_block_offset_from_addr_offset ...passed 00:09:02.659 Test: test_band_addr_from_block_offset ...passed 00:09:02.659 Test: test_band_set_addr ...passed 00:09:02.659 Test: test_invalidate_addr ...passed 00:09:02.659 Test: test_next_xfer_addr ...passed 00:09:02.659 00:09:02.659 Run Summary: Type Total Ran Passed Failed Inactive 00:09:02.659 suites 1 1 n/a 0 0 00:09:02.659 tests 6 6 6 0 0 00:09:02.659 asserts 30356 30356 30356 0 n/a 00:09:02.659 
00:09:02.659 Elapsed time = 0.214 seconds 00:09:02.918 04:51:32 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:09:02.918 00:09:02.918 00:09:02.918 CUnit - A unit testing framework for C - Version 2.1-3 00:09:02.918 http://cunit.sourceforge.net/ 00:09:02.918 00:09:02.918 00:09:02.918 Suite: ftl_bitmap 00:09:02.918 Test: test_ftl_bitmap_create ...[2024-04-27 04:51:32.650898] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:09:02.918 [2024-04-27 04:51:32.651381] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:09:02.918 passed 00:09:02.918 Test: test_ftl_bitmap_get ...passed 00:09:02.918 Test: test_ftl_bitmap_set ...passed 00:09:02.918 Test: test_ftl_bitmap_clear ...passed 00:09:02.918 Test: test_ftl_bitmap_find_first_set ...passed 00:09:02.918 Test: test_ftl_bitmap_find_first_clear ...passed 00:09:02.918 Test: test_ftl_bitmap_count_set ...passed 00:09:02.918 00:09:02.918 Run Summary: Type Total Ran Passed Failed Inactive 00:09:02.918 suites 1 1 n/a 0 0 00:09:02.918 tests 7 7 7 0 0 00:09:02.918 asserts 137 137 137 0 n/a 00:09:02.918 00:09:02.918 Elapsed time = 0.001 seconds 00:09:02.918 04:51:32 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:09:02.918 00:09:02.918 00:09:02.918 CUnit - A unit testing framework for C - Version 2.1-3 00:09:02.918 http://cunit.sourceforge.net/ 00:09:02.918 00:09:02.918 00:09:02.918 Suite: ftl_io_suite 00:09:02.918 Test: test_completion ...passed 00:09:02.918 Test: test_multiple_ios ...passed 00:09:02.918 00:09:02.918 Run Summary: Type Total Ran Passed Failed Inactive 00:09:02.918 suites 1 1 n/a 0 0 00:09:02.918 tests 2 2 2 0 0 00:09:02.918 asserts 47 47 47 0 n/a 00:09:02.918 00:09:02.918 Elapsed time = 0.004 seconds 00:09:02.918 04:51:32 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:09:02.918 00:09:02.918 00:09:02.918 CUnit - A unit testing framework for C - Version 2.1-3 00:09:02.918 http://cunit.sourceforge.net/ 00:09:02.918 00:09:02.918 00:09:02.918 Suite: ftl_mngt 00:09:02.918 Test: test_next_step ...passed 00:09:02.918 Test: test_continue_step ...passed 00:09:02.918 Test: test_get_func_and_step_cntx_alloc ...passed 00:09:02.918 Test: test_fail_step ...passed 00:09:02.918 Test: test_mngt_call_and_call_rollback ...passed 00:09:02.918 Test: test_nested_process_failure ...passed 00:09:02.918 00:09:02.918 Run Summary: Type Total Ran Passed Failed Inactive 00:09:02.918 suites 1 1 n/a 0 0 00:09:02.918 tests 6 6 6 0 0 00:09:02.918 asserts 176 176 176 0 n/a 00:09:02.918 00:09:02.918 Elapsed time = 0.002 seconds 00:09:02.918 04:51:32 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:09:02.918 00:09:02.918 00:09:02.918 CUnit - A unit testing framework for C - Version 2.1-3 00:09:02.918 http://cunit.sourceforge.net/ 00:09:02.918 00:09:02.918 00:09:02.918 Suite: ftl_mempool 00:09:02.918 Test: test_ftl_mempool_create ...passed 00:09:02.918 Test: test_ftl_mempool_get_put ...passed 00:09:02.918 00:09:02.918 Run Summary: Type Total Ran Passed Failed Inactive 00:09:02.918 suites 1 1 n/a 0 0 00:09:02.918 tests 2 2 2 0 0 00:09:02.918 asserts 36 36 36 0 n/a 00:09:02.918 00:09:02.918 Elapsed time = 0.000 seconds 00:09:02.918 04:51:32 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:09:02.918 00:09:02.918 00:09:02.918 CUnit - A unit testing framework for C - Version 2.1-3 00:09:02.918 http://cunit.sourceforge.net/ 00:09:02.918 00:09:02.918 00:09:02.918 Suite: ftl_addr64_suite 00:09:02.918 Test: test_addr_cached ...passed 00:09:02.918 00:09:02.918 Run Summary: Type Total Ran Passed Failed Inactive 00:09:02.918 suites 1 1 n/a 0 0 00:09:02.918 tests 1 1 1 0 0 00:09:02.918 asserts 1536 1536 1536 0 n/a 00:09:02.918 00:09:02.918 Elapsed time = 0.000 seconds 00:09:02.918 04:51:32 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:09:03.190 00:09:03.190 00:09:03.190 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.190 http://cunit.sourceforge.net/ 00:09:03.190 00:09:03.190 00:09:03.190 Suite: ftl_sb 00:09:03.190 Test: test_sb_crc_v2 ...passed 00:09:03.190 Test: test_sb_crc_v3 ...passed 00:09:03.190 Test: test_sb_v3_md_layout ...[2024-04-27 04:51:32.816124] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:09:03.190 [2024-04-27 04:51:32.816938] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:09:03.190 [2024-04-27 04:51:32.817030] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:09:03.190 [2024-04-27 04:51:32.817103] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:09:03.190 [2024-04-27 04:51:32.817166] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:09:03.190 [2024-04-27 04:51:32.817312] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:09:03.190 [2024-04-27 04:51:32.817404] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:09:03.190 [2024-04-27 04:51:32.817510] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:09:03.190 [2024-04-27 04:51:32.817628] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:09:03.190 [2024-04-27 04:51:32.817733] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:09:03.190 [2024-04-27 04:51:32.817810] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:09:03.190 passed 00:09:03.190 Test: test_sb_v5_md_layout ...passed 00:09:03.190 00:09:03.190 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.190 suites 1 1 n/a 0 0 00:09:03.190 tests 4 4 4 0 0 00:09:03.190 asserts 148 148 148 0 n/a 00:09:03.190 00:09:03.190 Elapsed time = 0.003 seconds 00:09:03.190 04:51:32 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:09:03.190 00:09:03.190 00:09:03.190 CUnit - A unit testing framework 
for C - Version 2.1-3 00:09:03.190 http://cunit.sourceforge.net/ 00:09:03.190 00:09:03.190 00:09:03.190 Suite: ftl_layout_upgrade 00:09:03.190 Test: test_l2p_upgrade ...passed 00:09:03.190 00:09:03.190 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.190 suites 1 1 n/a 0 0 00:09:03.190 tests 1 1 1 0 0 00:09:03.190 asserts 140 140 140 0 n/a 00:09:03.190 00:09:03.190 Elapsed time = 0.001 seconds 00:09:03.190 00:09:03.190 real 0m0.551s 00:09:03.190 user 0m0.244s 00:09:03.190 sys 0m0.310s 00:09:03.190 04:51:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.190 ************************************ 00:09:03.190 END TEST unittest_ftl 00:09:03.190 04:51:32 -- common/autotest_common.sh@10 -- # set +x 00:09:03.190 ************************************ 00:09:03.190 04:51:32 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:09:03.190 04:51:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:03.190 04:51:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:03.190 04:51:32 -- common/autotest_common.sh@10 -- # set +x 00:09:03.190 ************************************ 00:09:03.190 START TEST unittest_accel 00:09:03.190 ************************************ 00:09:03.190 04:51:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:09:03.190 00:09:03.190 00:09:03.190 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.190 http://cunit.sourceforge.net/ 00:09:03.190 00:09:03.190 00:09:03.190 Suite: accel_sequence 00:09:03.190 Test: test_sequence_fill_copy ...passed 00:09:03.190 Test: test_sequence_abort ...passed 00:09:03.190 Test: test_sequence_append_error ...passed 00:09:03.190 Test: test_sequence_completion_error ...[2024-04-27 04:51:32.952058] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f91500287c0 00:09:03.190 [2024-04-27 04:51:32.952512] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f91500287c0 00:09:03.190 [2024-04-27 04:51:32.952698] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f91500287c0 00:09:03.190 [2024-04-27 04:51:32.952770] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f91500287c0 00:09:03.190 passed 00:09:03.190 Test: test_sequence_decompress ...passed 00:09:03.190 Test: test_sequence_reverse ...passed 00:09:03.191 Test: test_sequence_copy_elision ...passed 00:09:03.191 Test: test_sequence_accel_buffers ...passed 00:09:03.191 Test: test_sequence_memory_domain ...[2024-04-27 04:51:32.964175] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:09:03.191 passed 00:09:03.191 Test: test_sequence_module_memory_domain ...[2024-04-27 04:51:32.964454] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:09:03.191 passed 00:09:03.191 Test: test_sequence_crypto ...passed 00:09:03.191 Test: test_sequence_driver ...[2024-04-27 04:51:32.971283] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f914f4007c0 using driver: ut 00:09:03.191 
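[editor's note] Each *_ut binary in this log is a stand-alone CUnit program, which is where the repeated "CUnit - A unit testing framework for C" banner, the per-test "...passed" markers, and the Run Summary tables come from. A minimal sketch of that pattern, with placeholder suite and test names (the real suites live under test/unit/lib in the SPDK tree):

#include <CUnit/Basic.h>

static void
test_example(void)
{
	CU_ASSERT(1 + 1 == 2);	/* each CU_ASSERT feeds the "asserts" column */
}

int
main(void)
{
	CU_pSuite suite;
	unsigned int num_failures;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}

	suite = CU_add_suite("example_suite", NULL, NULL);
	if (suite == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}
	CU_add_test(suite, "test_example", test_example);

	CU_basic_set_mode(CU_BRM_VERBOSE);	/* prints the per-test lines */
	CU_basic_run_tests();			/* prints the Run Summary table */
	num_failures = CU_get_number_of_failures();
	CU_cleanup_registry();

	return num_failures;
}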
passed 00:09:03.191 Test: test_sequence_same_iovs ...[2024-04-27 04:51:32.971465] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f914f4007c0 through driver: ut 00:09:03.191 passed 00:09:03.191 Test: test_sequence_crc32 ...passed 00:09:03.191 Suite: accel 00:09:03.191 Test: test_spdk_accel_task_complete ...passed 00:09:03.191 Test: test_get_task ...passed 00:09:03.191 Test: test_spdk_accel_submit_copy ...passed 00:09:03.191 Test: test_spdk_accel_submit_dualcast ...passed 00:09:03.191 Test: test_spdk_accel_submit_compare ...passed 00:09:03.191 Test: test_spdk_accel_submit_fill ...passed 00:09:03.191 Test: test_spdk_accel_submit_crc32c ...[2024-04-27 04:51:32.976822] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:09:03.191 [2024-04-27 04:51:32.976922] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:09:03.191 passed 00:09:03.191 Test: test_spdk_accel_submit_crc32cv ...passed 00:09:03.191 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:09:03.191 Test: test_spdk_accel_submit_xor ...passed 00:09:03.191 Test: test_spdk_accel_module_find_by_name ...passed 00:09:03.191 Test: test_spdk_accel_module_register ...passed 00:09:03.191 00:09:03.191 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.191 suites 2 2 n/a 0 0 00:09:03.191 tests 26 26 26 0 0 00:09:03.191 asserts 831 831 831 0 n/a 00:09:03.191 00:09:03.191 Elapsed time = 0.036 seconds 00:09:03.191 00:09:03.191 real 0m0.079s 00:09:03.191 user 0m0.042s 00:09:03.191 sys 0m0.038s 00:09:03.191 ************************************ 00:09:03.191 END TEST unittest_accel 00:09:03.191 ************************************ 00:09:03.191 04:51:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.191 04:51:32 -- common/autotest_common.sh@10 -- # set +x 00:09:03.191 04:51:33 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:09:03.191 04:51:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:03.191 04:51:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:03.191 04:51:33 -- common/autotest_common.sh@10 -- # set +x 00:09:03.191 ************************************ 00:09:03.191 START TEST unittest_ioat 00:09:03.191 ************************************ 00:09:03.191 04:51:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:09:03.191 00:09:03.191 00:09:03.191 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.191 http://cunit.sourceforge.net/ 00:09:03.191 00:09:03.191 00:09:03.191 Suite: ioat 00:09:03.191 Test: ioat_state_check ...passed 00:09:03.191 00:09:03.191 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.191 suites 1 1 n/a 0 0 00:09:03.191 tests 1 1 1 0 0 00:09:03.191 asserts 32 32 32 0 n/a 00:09:03.191 00:09:03.191 Elapsed time = 0.000 seconds 00:09:03.191 00:09:03.191 real 0m0.028s 00:09:03.191 user 0m0.020s 00:09:03.191 sys 0m0.008s 00:09:03.191 04:51:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.191 ************************************ 00:09:03.191 END TEST unittest_ioat 00:09:03.191 ************************************ 00:09:03.191 04:51:33 -- common/autotest_common.sh@10 -- # set +x 00:09:03.449 04:51:33 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:03.449 04:51:33 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:09:03.449 04:51:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:03.449 04:51:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:03.449 04:51:33 -- common/autotest_common.sh@10 -- # set +x 00:09:03.449 ************************************ 00:09:03.449 START TEST unittest_idxd_user 00:09:03.449 ************************************ 00:09:03.449 04:51:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:09:03.449 00:09:03.449 00:09:03.449 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.449 http://cunit.sourceforge.net/ 00:09:03.449 00:09:03.449 00:09:03.449 Suite: idxd_user 00:09:03.449 Test: test_idxd_wait_cmd ...[2024-04-27 04:51:33.158934] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:09:03.449 passed 00:09:03.449 Test: test_idxd_reset_dev ...passed 00:09:03.449 Test: test_idxd_group_config ...passed 00:09:03.449 Test: test_idxd_wq_config ...passed[2024-04-27 04:51:33.159945] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:09:03.449 [2024-04-27 04:51:33.160269] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:09:03.449 [2024-04-27 04:51:33.160373] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:09:03.449 00:09:03.449 00:09:03.449 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.449 suites 1 1 n/a 0 0 00:09:03.449 tests 4 4 4 0 0 00:09:03.449 asserts 20 20 20 0 n/a 00:09:03.449 00:09:03.449 Elapsed time = 0.001 seconds 00:09:03.449 00:09:03.449 real 0m0.038s 00:09:03.449 user 0m0.014s 00:09:03.449 sys 0m0.025s 00:09:03.449 04:51:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.449 04:51:33 -- common/autotest_common.sh@10 -- # set +x 00:09:03.449 ************************************ 00:09:03.449 END TEST unittest_idxd_user 00:09:03.449 ************************************ 00:09:03.449 04:51:33 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:09:03.450 04:51:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:03.450 04:51:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:03.450 04:51:33 -- common/autotest_common.sh@10 -- # set +x 00:09:03.450 ************************************ 00:09:03.450 START TEST unittest_iscsi 00:09:03.450 ************************************ 00:09:03.450 04:51:33 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:09:03.450 04:51:33 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:09:03.450 00:09:03.450 00:09:03.450 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.450 http://cunit.sourceforge.net/ 00:09:03.450 00:09:03.450 00:09:03.450 Suite: conn_suite 00:09:03.450 Test: read_task_split_in_order_case ...passed 00:09:03.450 Test: read_task_split_reverse_order_case ...passed 00:09:03.450 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:09:03.450 Test: process_non_read_task_completion_test ...passed 00:09:03.450 Test: free_tasks_on_connection ...passed 00:09:03.450 Test: free_tasks_with_queued_datain ...passed 00:09:03.450 Test: 
abort_queued_datain_task_test ...passed 00:09:03.450 Test: abort_queued_datain_tasks_test ...passed 00:09:03.450 00:09:03.450 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.450 suites 1 1 n/a 0 0 00:09:03.450 tests 8 8 8 0 0 00:09:03.450 asserts 230 230 230 0 n/a 00:09:03.450 00:09:03.450 Elapsed time = 0.000 seconds 00:09:03.450 04:51:33 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:09:03.450 00:09:03.450 00:09:03.450 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.450 http://cunit.sourceforge.net/ 00:09:03.450 00:09:03.450 00:09:03.450 Suite: iscsi_suite 00:09:03.450 Test: param_negotiation_test ...passed 00:09:03.450 Test: list_negotiation_test ...passed 00:09:03.450 Test: parse_valid_test ...passed 00:09:03.450 Test: parse_invalid_test ...[2024-04-27 04:51:33.299612] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:09:03.450 [2024-04-27 04:51:33.300073] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:09:03.450 [2024-04-27 04:51:33.300139] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:09:03.450 [2024-04-27 04:51:33.300224] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:09:03.450 [2024-04-27 04:51:33.300420] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:09:03.450 [2024-04-27 04:51:33.300489] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:09:03.450 [2024-04-27 04:51:33.300680] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:09:03.450 passed 00:09:03.450 00:09:03.450 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.450 suites 1 1 n/a 0 0 00:09:03.450 tests 4 4 4 0 0 00:09:03.450 asserts 161 161 161 0 n/a 00:09:03.450 00:09:03.450 Elapsed time = 0.005 seconds 00:09:03.450 04:51:33 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:09:03.450 00:09:03.450 00:09:03.450 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.450 http://cunit.sourceforge.net/ 00:09:03.450 00:09:03.450 00:09:03.450 Suite: iscsi_target_node_suite 00:09:03.450 Test: add_lun_test_cases ...[2024-04-27 04:51:33.336734] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:09:03.450 [2024-04-27 04:51:33.337195] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:09:03.450 [2024-04-27 04:51:33.337346] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:09:03.450 [2024-04-27 04:51:33.337413] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:09:03.450 [2024-04-27 04:51:33.337457] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:09:03.450 passed 00:09:03.450 Test: allow_any_allowed ...passed 00:09:03.450 Test: allow_ipv6_allowed ...passed 00:09:03.450 Test: allow_ipv6_denied ...passed 00:09:03.450 Test: allow_ipv6_invalid ...passed 00:09:03.450 Test: allow_ipv4_allowed ...passed 00:09:03.450 Test: allow_ipv4_denied ...passed 00:09:03.450 Test: allow_ipv4_invalid 
...passed 00:09:03.450 Test: node_access_allowed ...passed 00:09:03.450 Test: node_access_denied_by_empty_netmask ...passed 00:09:03.450 Test: node_access_multi_initiator_groups_cases ...passed 00:09:03.450 Test: allow_iscsi_name_multi_maps_case ...passed 00:09:03.450 Test: chap_param_test_cases ...[2024-04-27 04:51:33.338029] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:09:03.450 [2024-04-27 04:51:33.338086] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:09:03.450 [2024-04-27 04:51:33.338167] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:09:03.450 [2024-04-27 04:51:33.338211] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:09:03.450 [2024-04-27 04:51:33.338258] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:09:03.450 passed 00:09:03.450 00:09:03.450 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.450 suites 1 1 n/a 0 0 00:09:03.450 tests 13 13 13 0 0 00:09:03.450 asserts 50 50 50 0 n/a 00:09:03.450 00:09:03.450 Elapsed time = 0.002 seconds 00:09:03.709 04:51:33 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:09:03.709 00:09:03.709 00:09:03.709 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.709 http://cunit.sourceforge.net/ 00:09:03.709 00:09:03.709 00:09:03.709 Suite: iscsi_suite 00:09:03.709 Test: op_login_check_target_test ...[2024-04-27 04:51:33.380337] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:09:03.709 passed 00:09:03.709 Test: op_login_session_normal_test ...[2024-04-27 04:51:33.380961] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:03.709 [2024-04-27 04:51:33.381044] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:03.709 [2024-04-27 04:51:33.381106] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:03.709 [2024-04-27 04:51:33.381181] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:09:03.709 [2024-04-27 04:51:33.381351] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:09:03.709 [2024-04-27 04:51:33.381516] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:09:03.709 passed 00:09:03.709 Test: maxburstlength_test ...[2024-04-27 04:51:33.381597] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:09:03.709 [2024-04-27 04:51:33.381914] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:09:03.709 [2024-04-27 04:51:33.381995] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header 
(opcode=5) failed on NULL(NULL) 00:09:03.709 passed 00:09:03.709 Test: underflow_for_read_transfer_test ...passed 00:09:03.709 Test: underflow_for_zero_read_transfer_test ...passed 00:09:03.709 Test: underflow_for_request_sense_test ...passed 00:09:03.709 Test: underflow_for_check_condition_test ...passed 00:09:03.709 Test: add_transfer_task_test ...passed 00:09:03.709 Test: get_transfer_task_test ...passed 00:09:03.709 Test: del_transfer_task_test ...passed 00:09:03.709 Test: clear_all_transfer_tasks_test ...passed 00:09:03.709 Test: build_iovs_test ...passed 00:09:03.709 Test: build_iovs_with_md_test ...passed 00:09:03.709 Test: pdu_hdr_op_login_test ...[2024-04-27 04:51:33.383695] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:09:03.709 [2024-04-27 04:51:33.383840] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:09:03.709 [2024-04-27 04:51:33.383940] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:09:03.709 passed 00:09:03.709 Test: pdu_hdr_op_text_test ...[2024-04-27 04:51:33.384055] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:09:03.709 [2024-04-27 04:51:33.384176] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:09:03.709 [2024-04-27 04:51:33.384237] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:09:03.709 passed 00:09:03.709 Test: pdu_hdr_op_logout_test ...[2024-04-27 04:51:33.384329] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
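[editor's note] For context on the param_ut errors earlier in this iscsi block ("'=' not found", "Empty key", "Overflow Val 8193", "Key name length is bigger than 63", "Duplicated Key B"): iSCSI login text negotiation is parsed as key=value pairs. The strings below are illustrative samples of what passes and what trips each check; they are data only, since the parser under test (iscsi_parse_param) is internal to lib/iscsi and not a public API.

#include <stdio.h>

static const char *valid_pairs[] = {
	"MaxRecvDataSegmentLength=8192",	/* well-formed key=value */
	"HeaderDigest=CRC32C,None",		/* comma-separated value lists are allowed */
};

static const char *invalid_pairs[] = {
	"InitiatorName",			/* rejected: "'=' not found" */
	"=iqn.2016-06.io.spdk:init",		/* rejected: "Empty key" */
	/* a value longer than the per-key limit trips "Overflow Val 8193" or
	 * "Overflow Val 256"; a key name over 63 characters and a key sent
	 * twice ("Duplicated Key B") are rejected as well */
};

int
main(void)
{
	for (size_t i = 0; i < sizeof(valid_pairs) / sizeof(valid_pairs[0]); i++) {
		printf("valid:   %s\n", valid_pairs[i]);
	}
	for (size_t i = 0; i < sizeof(invalid_pairs) / sizeof(invalid_pairs[0]); i++) {
		printf("invalid: %s\n", invalid_pairs[i]);
	}
	return 0;
}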
00:09:03.709 passed 00:09:03.709 Test: pdu_hdr_op_scsi_test ...[2024-04-27 04:51:33.384542] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:09:03.709 [2024-04-27 04:51:33.384649] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:09:03.709 [2024-04-27 04:51:33.384727] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:09:03.709 [2024-04-27 04:51:33.384843] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:09:03.709 [2024-04-27 04:51:33.384951] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:09:03.709 [2024-04-27 04:51:33.385153] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:09:03.709 passed 00:09:03.709 Test: pdu_hdr_op_task_mgmt_test ...[2024-04-27 04:51:33.385284] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:09:03.709 [2024-04-27 04:51:33.385391] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:09:03.709 passed 00:09:03.710 Test: pdu_hdr_op_nopout_test ...[2024-04-27 04:51:33.385652] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:09:03.710 [2024-04-27 04:51:33.385775] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:09:03.710 [2024-04-27 04:51:33.385823] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:09:03.710 [2024-04-27 04:51:33.385870] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:09:03.710 passed 00:09:03.710 Test: pdu_hdr_op_data_test ...[2024-04-27 04:51:33.385915] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:09:03.710 [2024-04-27 04:51:33.385993] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:09:03.710 [2024-04-27 04:51:33.386066] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:09:03.710 [2024-04-27 04:51:33.386157] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:09:03.710 [2024-04-27 04:51:33.386237] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:09:03.710 [2024-04-27 04:51:33.386369] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:09:03.710 [2024-04-27 04:51:33.386426] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:09:03.710 passed 00:09:03.710 Test: empty_text_with_cbit_test ...passed 00:09:03.710 Test: pdu_payload_read_test ...[2024-04-27 
04:51:33.388830] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:09:03.710 passed 00:09:03.710 Test: data_out_pdu_sequence_test ...passed 00:09:03.710 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:09:03.710 00:09:03.710 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.710 suites 1 1 n/a 0 0 00:09:03.710 tests 24 24 24 0 0 00:09:03.710 asserts 150253 150253 150253 0 n/a 00:09:03.710 00:09:03.710 Elapsed time = 0.019 seconds 00:09:03.710 04:51:33 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:09:03.710 00:09:03.710 00:09:03.710 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.710 http://cunit.sourceforge.net/ 00:09:03.710 00:09:03.710 00:09:03.710 Suite: init_grp_suite 00:09:03.710 Test: create_initiator_group_success_case ...passed 00:09:03.710 Test: find_initiator_group_success_case ...passed 00:09:03.710 Test: register_initiator_group_twice_case ...passed 00:09:03.710 Test: add_initiator_name_success_case ...passed 00:09:03.710 Test: add_initiator_name_fail_case ...[2024-04-27 04:51:33.436596] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:09:03.710 passed 00:09:03.710 Test: delete_all_initiator_names_success_case ...passed 00:09:03.710 Test: add_netmask_success_case ...passed 00:09:03.710 Test: add_netmask_fail_case ...passed 00:09:03.710 Test: delete_all_netmasks_success_case ...[2024-04-27 04:51:33.437218] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:09:03.710 passed 00:09:03.710 Test: initiator_name_overwrite_all_to_any_case ...passed 00:09:03.710 Test: netmask_overwrite_all_to_any_case ...passed 00:09:03.710 Test: add_delete_initiator_names_case ...passed 00:09:03.710 Test: add_duplicated_initiator_names_case ...passed 00:09:03.710 Test: delete_nonexisting_initiator_names_case ...passed 00:09:03.710 Test: add_delete_netmasks_case ...passed 00:09:03.710 Test: add_duplicated_netmasks_case ...passed 00:09:03.710 Test: delete_nonexisting_netmasks_case ...passed 00:09:03.710 00:09:03.710 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.710 suites 1 1 n/a 0 0 00:09:03.710 tests 17 17 17 0 0 00:09:03.710 asserts 108 108 108 0 n/a 00:09:03.710 00:09:03.710 Elapsed time = 0.001 seconds 00:09:03.710 04:51:33 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:09:03.710 00:09:03.710 00:09:03.710 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.710 http://cunit.sourceforge.net/ 00:09:03.710 00:09:03.710 00:09:03.710 Suite: portal_grp_suite 00:09:03.710 Test: portal_create_ipv4_normal_case ...passed 00:09:03.710 Test: portal_create_ipv6_normal_case ...passed 00:09:03.710 Test: portal_create_ipv4_wildcard_case ...passed 00:09:03.710 Test: portal_create_ipv6_wildcard_case ...passed 00:09:03.710 Test: portal_create_twice_case ...[2024-04-27 04:51:33.473320] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:09:03.710 passed 00:09:03.710 Test: portal_grp_register_unregister_case ...passed 00:09:03.710 Test: portal_grp_register_twice_case ...passed 00:09:03.710 Test: portal_grp_add_delete_case ...passed 00:09:03.710 Test: portal_grp_add_delete_twice_case ...passed 00:09:03.710 00:09:03.710 Run Summary: 
Type Total Ran Passed Failed Inactive 00:09:03.710 suites 1 1 n/a 0 0 00:09:03.710 tests 9 9 9 0 0 00:09:03.710 asserts 44 44 44 0 n/a 00:09:03.710 00:09:03.710 Elapsed time = 0.004 seconds 00:09:03.710 00:09:03.710 real 0m0.257s 00:09:03.710 user 0m0.126s 00:09:03.710 sys 0m0.134s 00:09:03.710 04:51:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.710 04:51:33 -- common/autotest_common.sh@10 -- # set +x 00:09:03.710 ************************************ 00:09:03.710 END TEST unittest_iscsi 00:09:03.710 ************************************ 00:09:03.710 04:51:33 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:09:03.710 04:51:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:03.710 04:51:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:03.710 04:51:33 -- common/autotest_common.sh@10 -- # set +x 00:09:03.710 ************************************ 00:09:03.710 START TEST unittest_json 00:09:03.710 ************************************ 00:09:03.710 04:51:33 -- common/autotest_common.sh@1104 -- # unittest_json 00:09:03.710 04:51:33 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:09:03.710 00:09:03.710 00:09:03.710 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.710 http://cunit.sourceforge.net/ 00:09:03.710 00:09:03.710 00:09:03.710 Suite: json 00:09:03.710 Test: test_parse_literal ...passed 00:09:03.710 Test: test_parse_string_simple ...passed 00:09:03.710 Test: test_parse_string_control_chars ...passed 00:09:03.710 Test: test_parse_string_utf8 ...passed 00:09:03.710 Test: test_parse_string_escapes_twochar ...passed 00:09:03.710 Test: test_parse_string_escapes_unicode ...passed 00:09:03.710 Test: test_parse_number ...passed 00:09:03.710 Test: test_parse_array ...passed 00:09:03.710 Test: test_parse_object ...passed 00:09:03.710 Test: test_parse_nesting ...passed 00:09:03.710 Test: test_parse_comment ...passed 00:09:03.710 00:09:03.710 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.710 suites 1 1 n/a 0 0 00:09:03.710 tests 11 11 11 0 0 00:09:03.710 asserts 1516 1516 1516 0 n/a 00:09:03.710 00:09:03.710 Elapsed time = 0.002 seconds 00:09:03.710 04:51:33 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:09:03.710 00:09:03.710 00:09:03.710 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.710 http://cunit.sourceforge.net/ 00:09:03.710 00:09:03.710 00:09:03.710 Suite: json 00:09:03.710 Test: test_strequal ...passed 00:09:03.710 Test: test_num_to_uint16 ...passed 00:09:03.710 Test: test_num_to_int32 ...passed 00:09:03.710 Test: test_num_to_uint64 ...passed 00:09:03.710 Test: test_decode_object ...passed 00:09:03.710 Test: test_decode_array ...passed 00:09:03.710 Test: test_decode_bool ...passed 00:09:03.710 Test: test_decode_uint16 ...passed 00:09:03.710 Test: test_decode_int32 ...passed 00:09:03.710 Test: test_decode_uint32 ...passed 00:09:03.710 Test: test_decode_uint64 ...passed 00:09:03.710 Test: test_decode_string ...passed 00:09:03.710 Test: test_decode_uuid ...passed 00:09:03.710 Test: test_find ...passed 00:09:03.710 Test: test_find_array ...passed 00:09:03.710 Test: test_iterating ...passed 00:09:03.710 Test: test_free_object ...passed 00:09:03.710 00:09:03.710 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.710 suites 1 1 n/a 0 0 00:09:03.710 tests 17 17 17 0 0 00:09:03.710 asserts 236 236 236 0 n/a 00:09:03.710 00:09:03.710 Elapsed time = 0.001 seconds 00:09:03.969 
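[editor's note] The json_write_ut run that follows covers the streaming writer in spdk/json.h. A small sketch of that API, assuming recent SPDK headers; the write_stdout callback and the example object (a JSON-RPC style request such as rpc_get_methods, which the later jsonrpc_server and rpc suites parse) are illustrative only.

#include "spdk/json.h"
#include "spdk/stdinc.h"

static int
write_stdout(void *cb_ctx, const void *data, size_t size)
{
	return fwrite(data, 1, size, stdout) == size ? 0 : -1;
}

int
main(void)
{
	struct spdk_json_write_ctx *w;

	w = spdk_json_write_begin(write_stdout, NULL, SPDK_JSON_WRITE_FLAG_FORMATTED);
	if (w == NULL) {
		return 1;
	}

	/* Emits {"jsonrpc": "2.0", "method": "rpc_get_methods", "id": 1},
	 * the request shape the JSON-RPC server tests exercise. */
	spdk_json_write_object_begin(w);
	spdk_json_write_named_string(w, "jsonrpc", "2.0");
	spdk_json_write_named_string(w, "method", "rpc_get_methods");
	spdk_json_write_named_uint32(w, "id", 1);
	spdk_json_write_object_end(w);

	return spdk_json_write_end(w);
}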
04:51:33 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:09:03.969 00:09:03.969 00:09:03.969 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.969 http://cunit.sourceforge.net/ 00:09:03.969 00:09:03.969 00:09:03.969 Suite: json 00:09:03.969 Test: test_write_literal ...passed 00:09:03.969 Test: test_write_string_simple ...passed 00:09:03.969 Test: test_write_string_escapes ...passed 00:09:03.969 Test: test_write_string_utf16le ...passed 00:09:03.969 Test: test_write_number_int32 ...passed 00:09:03.969 Test: test_write_number_uint32 ...passed 00:09:03.969 Test: test_write_number_uint128 ...passed 00:09:03.969 Test: test_write_string_number_uint128 ...passed 00:09:03.969 Test: test_write_number_int64 ...passed 00:09:03.969 Test: test_write_number_uint64 ...passed 00:09:03.969 Test: test_write_number_double ...passed 00:09:03.969 Test: test_write_uuid ...passed 00:09:03.969 Test: test_write_array ...passed 00:09:03.969 Test: test_write_object ...passed 00:09:03.969 Test: test_write_nesting ...passed 00:09:03.969 Test: test_write_val ...passed 00:09:03.969 00:09:03.969 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.969 suites 1 1 n/a 0 0 00:09:03.969 tests 16 16 16 0 0 00:09:03.969 asserts 918 918 918 0 n/a 00:09:03.969 00:09:03.969 Elapsed time = 0.005 seconds 00:09:03.969 04:51:33 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:09:03.969 00:09:03.969 00:09:03.969 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.969 http://cunit.sourceforge.net/ 00:09:03.969 00:09:03.969 00:09:03.969 Suite: jsonrpc 00:09:03.969 Test: test_parse_request ...passed 00:09:03.969 Test: test_parse_request_streaming ...passed 00:09:03.969 00:09:03.969 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.969 suites 1 1 n/a 0 0 00:09:03.969 tests 2 2 2 0 0 00:09:03.969 asserts 289 289 289 0 n/a 00:09:03.969 00:09:03.969 Elapsed time = 0.005 seconds 00:09:03.969 00:09:03.969 real 0m0.141s 00:09:03.969 user 0m0.075s 00:09:03.969 sys 0m0.068s 00:09:03.969 04:51:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.969 04:51:33 -- common/autotest_common.sh@10 -- # set +x 00:09:03.969 ************************************ 00:09:03.969 END TEST unittest_json 00:09:03.969 ************************************ 00:09:03.969 04:51:33 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:09:03.969 04:51:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:03.969 04:51:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:03.969 04:51:33 -- common/autotest_common.sh@10 -- # set +x 00:09:03.969 ************************************ 00:09:03.969 START TEST unittest_rpc 00:09:03.969 ************************************ 00:09:03.969 04:51:33 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:09:03.969 04:51:33 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:09:03.969 00:09:03.969 00:09:03.969 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.969 http://cunit.sourceforge.net/ 00:09:03.969 00:09:03.969 00:09:03.969 Suite: rpc 00:09:03.969 Test: test_jsonrpc_handler ...passed 00:09:03.969 Test: test_spdk_rpc_is_method_allowed ...passed 00:09:03.969 Test: test_rpc_get_methods ...[2024-04-27 04:51:33.759079] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:09:03.969 passed 00:09:03.969 Test: 
test_rpc_spdk_get_version ...passed 00:09:03.969 Test: test_spdk_rpc_listen_close ...passed 00:09:03.969 00:09:03.969 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.969 suites 1 1 n/a 0 0 00:09:03.969 tests 5 5 5 0 0 00:09:03.969 asserts 20 20 20 0 n/a 00:09:03.969 00:09:03.969 Elapsed time = 0.001 seconds 00:09:03.969 00:09:03.969 real 0m0.032s 00:09:03.969 user 0m0.021s 00:09:03.969 sys 0m0.011s 00:09:03.969 04:51:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.969 04:51:33 -- common/autotest_common.sh@10 -- # set +x 00:09:03.969 ************************************ 00:09:03.969 END TEST unittest_rpc 00:09:03.969 ************************************ 00:09:03.969 04:51:33 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:09:03.969 04:51:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:03.969 04:51:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:03.969 04:51:33 -- common/autotest_common.sh@10 -- # set +x 00:09:03.969 ************************************ 00:09:03.969 START TEST unittest_notify 00:09:03.969 ************************************ 00:09:03.969 04:51:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:09:03.969 00:09:03.969 00:09:03.969 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.969 http://cunit.sourceforge.net/ 00:09:03.969 00:09:03.969 00:09:03.969 Suite: app_suite 00:09:03.969 Test: notify ...passed 00:09:03.969 00:09:03.969 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.969 suites 1 1 n/a 0 0 00:09:03.969 tests 1 1 1 0 0 00:09:03.969 asserts 13 13 13 0 n/a 00:09:03.969 00:09:03.969 Elapsed time = 0.000 seconds 00:09:03.969 00:09:03.969 real 0m0.026s 00:09:03.969 user 0m0.013s 00:09:03.969 sys 0m0.014s 00:09:03.969 04:51:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.969 04:51:33 -- common/autotest_common.sh@10 -- # set +x 00:09:03.969 ************************************ 00:09:03.969 END TEST unittest_notify 00:09:03.969 ************************************ 00:09:04.229 04:51:33 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:09:04.229 04:51:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:04.229 04:51:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:04.229 04:51:33 -- common/autotest_common.sh@10 -- # set +x 00:09:04.229 ************************************ 00:09:04.229 START TEST unittest_nvme 00:09:04.229 ************************************ 00:09:04.229 04:51:33 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:09:04.229 04:51:33 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:09:04.229 00:09:04.229 00:09:04.229 CUnit - A unit testing framework for C - Version 2.1-3 00:09:04.229 http://cunit.sourceforge.net/ 00:09:04.229 00:09:04.229 00:09:04.229 Suite: nvme 00:09:04.229 Test: test_opc_data_transfer ...passed 00:09:04.229 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:09:04.229 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:09:04.229 Test: test_trid_parse_and_compare ...[2024-04-27 04:51:33.924286] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:09:04.229 passed 00:09:04.229 Test: test_trid_trtype_str ...passed 00:09:04.229 Test: test_trid_adrfam_str ...passed 00:09:04.229 Test: test_nvme_ctrlr_probe ...passed 00:09:04.229 Test: 
test_spdk_nvme_probe ...passed 00:09:04.229 Test: test_spdk_nvme_connect ...passed 00:09:04.229 Test: test_nvme_ctrlr_probe_internal ...passed 00:09:04.229 Test: test_nvme_init_controllers ...passed 00:09:04.229 Test: test_nvme_driver_init ...[2024-04-27 04:51:33.924720] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:04.229 [2024-04-27 04:51:33.924863] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:09:04.229 [2024-04-27 04:51:33.924920] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:04.229 [2024-04-27 04:51:33.924967] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:09:04.229 [2024-04-27 04:51:33.925086] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:04.229 [2024-04-27 04:51:33.925375] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:09:04.229 [2024-04-27 04:51:33.925513] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:04.229 [2024-04-27 04:51:33.925563] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:09:04.229 [2024-04-27 04:51:33.925693] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:09:04.229 [2024-04-27 04:51:33.925751] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:09:04.229 [2024-04-27 04:51:33.925861] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:09:04.229 [2024-04-27 04:51:33.926327] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:04.229 [2024-04-27 04:51:33.926415] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:09:04.229 [2024-04-27 04:51:33.926594] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:09:04.229 [2024-04-27 04:51:33.926654] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:09:04.229 [2024-04-27 04:51:33.926756] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:09:04.229 [2024-04-27 04:51:33.926914] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:09:04.229 [2024-04-27 04:51:33.926963] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:04.229 [2024-04-27 04:51:34.039998] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:09:04.229 [2024-04-27 04:51:34.040637] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:09:04.229 passed 00:09:04.229 Test: test_spdk_nvme_detach ...passed 00:09:04.229 Test: test_nvme_completion_poll_cb ...passed 00:09:04.229 Test: test_nvme_user_copy_cmd_complete ...passed 00:09:04.229 Test: test_nvme_allocate_request_null 
...passed 00:09:04.229 Test: test_nvme_allocate_request ...passed 00:09:04.229 Test: test_nvme_free_request ...passed 00:09:04.229 Test: test_nvme_allocate_request_user_copy ...passed 00:09:04.229 Test: test_nvme_robust_mutex_init_shared ...passed 00:09:04.229 Test: test_nvme_request_check_timeout ...passed 00:09:04.229 Test: test_nvme_wait_for_completion ...passed 00:09:04.229 Test: test_spdk_nvme_parse_func ...passed 00:09:04.229 Test: test_spdk_nvme_detach_async ...passed 00:09:04.229 Test: test_nvme_parse_addr ...[2024-04-27 04:51:34.044849] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:09:04.229 passed 00:09:04.229 00:09:04.229 Run Summary: Type Total Ran Passed Failed Inactive 00:09:04.229 suites 1 1 n/a 0 0 00:09:04.229 tests 25 25 25 0 0 00:09:04.229 asserts 326 326 326 0 n/a 00:09:04.229 00:09:04.229 Elapsed time = 0.008 seconds 00:09:04.229 04:51:34 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:09:04.229 00:09:04.229 00:09:04.229 CUnit - A unit testing framework for C - Version 2.1-3 00:09:04.229 http://cunit.sourceforge.net/ 00:09:04.229 00:09:04.229 00:09:04.229 Suite: nvme_ctrlr 00:09:04.229 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-04-27 04:51:34.083783] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.229 passed 00:09:04.229 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-04-27 04:51:34.085873] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.229 passed 00:09:04.229 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-04-27 04:51:34.087176] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.229 passed 00:09:04.229 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-04-27 04:51:34.088451] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.229 passed 00:09:04.229 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-04-27 04:51:34.089815] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.229 [2024-04-27 04:51:34.091028] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-27 04:51:34.092299] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-27 04:51:34.093495] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:09:04.229 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-04-27 04:51:34.096197] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.229 [2024-04-27 04:51:34.098701] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-27 04:51:34.100086] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:09:04.229 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-04-27 04:51:34.102792] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.229 [2024-04-27 04:51:34.104082] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-27 04:51:34.106631] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:09:04.230 Test: test_nvme_ctrlr_init_delay ...[2024-04-27 04:51:34.109290] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.230 passed 00:09:04.230 Test: test_alloc_io_qpair_rr_1 ...[2024-04-27 04:51:34.110751] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.230 [2024-04-27 04:51:34.111067] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:09:04.230 [2024-04-27 04:51:34.111448] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:09:04.230 [2024-04-27 04:51:34.111576] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:09:04.230 passed 00:09:04.230 Test: test_ctrlr_get_default_ctrlr_opts ...[2024-04-27 04:51:34.111712] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:09:04.230 passed 00:09:04.230 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:09:04.230 Test: test_alloc_io_qpair_wrr_1 ...[2024-04-27 04:51:34.112036] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.230 passed 00:09:04.230 Test: test_alloc_io_qpair_wrr_2 ...[2024-04-27 04:51:34.112391] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.230 [2024-04-27 04:51:34.112689] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:09:04.230 passed 00:09:04.230 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-04-27 04:51:34.113160] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4832:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:09:04.230 [2024-04-27 04:51:34.113479] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:09:04.230 [2024-04-27 04:51:34.113690] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4909:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
00:09:04.230 [2024-04-27 04:51:34.113858] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:09:04.230 passed 00:09:04.230 Test: test_nvme_ctrlr_fail ...[2024-04-27 04:51:34.114013] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:09:04.230 passed 00:09:04.230 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:09:04.230 Test: test_nvme_ctrlr_set_supported_features ...passed 00:09:04.230 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:09:04.230 Test: test_nvme_ctrlr_test_active_ns ...[2024-04-27 04:51:34.114627] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.799 passed 00:09:04.799 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:09:04.799 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:09:04.799 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:09:04.799 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-04-27 04:51:34.389898] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.799 passed 00:09:04.799 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-04-27 04:51:34.397098] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.799 passed 00:09:04.799 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-04-27 04:51:34.398413] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.799 [2024-04-27 04:51:34.398485] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2869:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:09:04.799 passed 00:09:04.799 Test: test_alloc_io_qpair_fail ...[2024-04-27 04:51:34.399649] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.799 passed 00:09:04.799 Test: test_nvme_ctrlr_add_remove_process ...passed 00:09:04.799 Test: test_nvme_ctrlr_set_arbitration_feature ...[2024-04-27 04:51:34.399816] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:09:04.799 passed 00:09:04.799 Test: test_nvme_ctrlr_set_state ...passed 00:09:04.799 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-04-27 04:51:34.399946] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1464:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:09:04.799 [2024-04-27 04:51:34.399997] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.799 passed 00:09:04.799 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-04-27 04:51:34.423828] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.799 passed 00:09:04.799 Test: test_nvme_ctrlr_ns_mgmt ...[2024-04-27 04:51:34.468406] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.799 passed 00:09:04.799 Test: test_nvme_ctrlr_reset ...[2024-04-27 04:51:34.470142] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.799 passed 00:09:04.799 Test: test_nvme_ctrlr_aer_callback ...[2024-04-27 04:51:34.470605] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.799 passed 00:09:04.799 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-04-27 04:51:34.472098] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.799 passed 00:09:04.799 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:09:04.799 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:09:04.799 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-04-27 04:51:34.474045] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.799 passed 00:09:04.799 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:09:04.799 Test: test_nvme_ctrlr_ana_resize ...[2024-04-27 04:51:34.475585] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.799 passed 00:09:04.799 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:09:04.799 Test: test_nvme_transport_ctrlr_ready ...[2024-04-27 04:51:34.477175] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4015:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:09:04.799 [2024-04-27 04:51:34.477245] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:09:04.799 passed 00:09:04.799 Test: test_nvme_ctrlr_disable ...[2024-04-27 04:51:34.477300] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:04.799 passed 00:09:04.799 00:09:04.799 Run Summary: Type Total Ran Passed Failed Inactive 00:09:04.799 suites 1 1 n/a 0 0 00:09:04.799 tests 43 43 43 0 0 00:09:04.799 asserts 10418 10418 10418 0 n/a 00:09:04.799 00:09:04.799 Elapsed time = 0.354 seconds 00:09:04.799 04:51:34 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:09:04.799 00:09:04.799 00:09:04.799 CUnit - A unit testing framework for C - Version 2.1-3 
00:09:04.799 http://cunit.sourceforge.net/ 00:09:04.799 00:09:04.799 00:09:04.799 Suite: nvme_ctrlr_cmd 00:09:04.799 Test: test_get_log_pages ...passed 00:09:04.799 Test: test_set_feature_cmd ...passed 00:09:04.799 Test: test_set_feature_ns_cmd ...passed 00:09:04.799 Test: test_get_feature_cmd ...passed 00:09:04.799 Test: test_get_feature_ns_cmd ...passed 00:09:04.799 Test: test_abort_cmd ...passed 00:09:04.799 Test: test_set_host_id_cmds ...[2024-04-27 04:51:34.533286] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 502:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:09:04.799 passed 00:09:04.800 Test: test_io_cmd_raw_no_payload_build ...passed 00:09:04.800 Test: test_io_raw_cmd ...passed 00:09:04.800 Test: test_io_raw_cmd_with_md ...passed 00:09:04.800 Test: test_namespace_attach ...passed 00:09:04.800 Test: test_namespace_detach ...passed 00:09:04.800 Test: test_namespace_create ...passed 00:09:04.800 Test: test_namespace_delete ...passed 00:09:04.800 Test: test_doorbell_buffer_config ...passed 00:09:04.800 Test: test_format_nvme ...passed 00:09:04.800 Test: test_fw_commit ...passed 00:09:04.800 Test: test_fw_image_download ...passed 00:09:04.800 Test: test_sanitize ...passed 00:09:04.800 Test: test_directive ...passed 00:09:04.800 Test: test_nvme_request_add_abort ...passed 00:09:04.800 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:09:04.800 Test: test_nvme_ctrlr_cmd_identify ...passed 00:09:04.800 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:09:04.800 00:09:04.800 Run Summary: Type Total Ran Passed Failed Inactive 00:09:04.800 suites 1 1 n/a 0 0 00:09:04.800 tests 24 24 24 0 0 00:09:04.800 asserts 198 198 198 0 n/a 00:09:04.800 00:09:04.800 Elapsed time = 0.001 seconds 00:09:04.800 04:51:34 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:09:04.800 00:09:04.800 00:09:04.800 CUnit - A unit testing framework for C - Version 2.1-3 00:09:04.800 http://cunit.sourceforge.net/ 00:09:04.800 00:09:04.800 00:09:04.800 Suite: nvme_ctrlr_cmd 00:09:04.800 Test: test_geometry_cmd ...passed 00:09:04.800 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:09:04.800 00:09:04.800 Run Summary: Type Total Ran Passed Failed Inactive 00:09:04.800 suites 1 1 n/a 0 0 00:09:04.800 tests 2 2 2 0 0 00:09:04.800 asserts 7 7 7 0 n/a 00:09:04.800 00:09:04.800 Elapsed time = 0.000 seconds 00:09:04.800 04:51:34 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:09:04.800 00:09:04.800 00:09:04.800 CUnit - A unit testing framework for C - Version 2.1-3 00:09:04.800 http://cunit.sourceforge.net/ 00:09:04.800 00:09:04.800 00:09:04.800 Suite: nvme 00:09:04.800 Test: test_nvme_ns_construct ...passed 00:09:04.800 Test: test_nvme_ns_uuid ...passed 00:09:04.800 Test: test_nvme_ns_csi ...passed 00:09:04.800 Test: test_nvme_ns_data ...passed 00:09:04.800 Test: test_nvme_ns_set_identify_data ...passed 00:09:04.800 Test: test_spdk_nvme_ns_get_values ...passed 00:09:04.800 Test: test_spdk_nvme_ns_is_active ...passed 00:09:04.800 Test: spdk_nvme_ns_supports ...passed 00:09:04.800 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:09:04.800 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:09:04.800 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:09:04.800 Test: test_nvme_ns_find_id_desc ...passed 00:09:04.800 00:09:04.800 Run Summary: Type Total Ran Passed Failed Inactive 00:09:04.800 suites 1 1 n/a 0 0 00:09:04.800 tests 
12 12 12 0 0 00:09:04.800 asserts 83 83 83 0 n/a 00:09:04.800 00:09:04.800 Elapsed time = 0.001 seconds 00:09:04.800 04:51:34 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:09:04.800 00:09:04.800 00:09:04.800 CUnit - A unit testing framework for C - Version 2.1-3 00:09:04.800 http://cunit.sourceforge.net/ 00:09:04.800 00:09:04.800 00:09:04.800 Suite: nvme_ns_cmd 00:09:04.800 Test: split_test ...passed 00:09:04.800 Test: split_test2 ...passed 00:09:04.800 Test: split_test3 ...passed 00:09:04.800 Test: split_test4 ...passed 00:09:04.800 Test: test_nvme_ns_cmd_flush ...passed 00:09:04.800 Test: test_nvme_ns_cmd_dataset_management ...passed 00:09:04.800 Test: test_nvme_ns_cmd_copy ...passed 00:09:04.800 Test: test_io_flags ...[2024-04-27 04:51:34.624617] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:09:04.800 passed 00:09:04.800 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:09:04.800 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:09:04.800 Test: test_nvme_ns_cmd_reservation_register ...passed 00:09:04.800 Test: test_nvme_ns_cmd_reservation_release ...passed 00:09:04.800 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:09:04.800 Test: test_nvme_ns_cmd_reservation_report ...passed 00:09:04.800 Test: test_cmd_child_request ...passed 00:09:04.800 Test: test_nvme_ns_cmd_readv ...passed 00:09:04.800 Test: test_nvme_ns_cmd_read_with_md ...passed 00:09:04.800 Test: test_nvme_ns_cmd_writev ...[2024-04-27 04:51:34.625941] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:09:04.800 passed 00:09:04.800 Test: test_nvme_ns_cmd_write_with_md ...passed 00:09:04.800 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:09:04.800 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:09:04.800 Test: test_nvme_ns_cmd_comparev ...passed 00:09:04.800 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:09:04.800 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:09:04.800 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:09:04.800 Test: test_nvme_ns_cmd_setup_request ...passed 00:09:04.800 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:09:04.800 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-04-27 04:51:34.627973] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:09:04.800 passed 00:09:04.800 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:09:04.800 Test: test_nvme_ns_cmd_verify ...[2024-04-27 04:51:34.628088] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:09:04.800 passed 00:09:04.800 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:09:04.800 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:09:04.800 00:09:04.800 Run Summary: Type Total Ran Passed Failed Inactive 00:09:04.800 suites 1 1 n/a 0 0 00:09:04.800 tests 32 32 32 0 0 00:09:04.800 asserts 550 550 550 0 n/a 00:09:04.800 00:09:04.800 Elapsed time = 0.005 seconds 00:09:04.800 04:51:34 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:09:04.800 00:09:04.800 00:09:04.800 CUnit - A unit testing framework for C - Version 2.1-3 00:09:04.800 http://cunit.sourceforge.net/ 00:09:04.800 00:09:04.800 00:09:04.800 Suite: nvme_ns_cmd 00:09:04.800 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
00:09:04.800 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:09:04.800 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:09:04.800 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:09:04.800 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:09:04.800 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:09:04.800 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:09:04.800 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:09:04.800 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:09:04.800 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:09:04.800 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:09:04.800 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:09:04.800 00:09:04.800 Run Summary: Type Total Ran Passed Failed Inactive 00:09:04.800 suites 1 1 n/a 0 0 00:09:04.800 tests 12 12 12 0 0 00:09:04.800 asserts 123 123 123 0 n/a 00:09:04.800 00:09:04.800 Elapsed time = 0.002 seconds 00:09:04.800 04:51:34 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:09:04.800 00:09:04.800 00:09:04.800 CUnit - A unit testing framework for C - Version 2.1-3 00:09:04.800 http://cunit.sourceforge.net/ 00:09:04.800 00:09:04.800 00:09:04.800 Suite: nvme_qpair 00:09:05.065 Test: test3 ...passed 00:09:05.065 Test: test_ctrlr_failed ...passed 00:09:05.065 Test: struct_packing ...passed 00:09:05.065 Test: test_nvme_qpair_process_completions ...[2024-04-27 04:51:34.695205] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:05.065 [2024-04-27 04:51:34.695839] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:05.065 [2024-04-27 04:51:34.695939] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:09:05.065 passed 00:09:05.065 Test: test_nvme_completion_is_retry ...passed 00:09:05.065 Test: test_get_status_string ...passed 00:09:05.065 Test: test_nvme_qpair_add_cmd_error_injection ...[2024-04-27 04:51:34.696069] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:09:05.065 passed 00:09:05.065 Test: test_nvme_qpair_submit_request ...passed 00:09:05.065 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:09:05.065 Test: test_nvme_qpair_manual_complete_request ...passed 00:09:05.065 Test: test_nvme_qpair_init_deinit ...[2024-04-27 04:51:34.696910] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:05.065 passed 00:09:05.065 Test: test_nvme_get_sgl_print_info ...passed 00:09:05.065 00:09:05.065 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.065 suites 1 1 n/a 0 0 00:09:05.065 tests 12 12 12 0 0 00:09:05.065 asserts 154 154 154 0 n/a 00:09:05.065 00:09:05.065 Elapsed time = 0.002 seconds 00:09:05.065 04:51:34 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:09:05.065 00:09:05.065 00:09:05.065 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.065 http://cunit.sourceforge.net/ 00:09:05.065 00:09:05.065 00:09:05.065 Suite: nvme_pcie 00:09:05.065 Test: test_prp_list_append 
...[2024-04-27 04:51:34.729391] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:09:05.065 [2024-04-27 04:51:34.729805] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:09:05.065 [2024-04-27 04:51:34.729862] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:09:05.065 [2024-04-27 04:51:34.730154] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:09:05.065 passed 00:09:05.065 Test: test_nvme_pcie_hotplug_monitor ...[2024-04-27 04:51:34.730267] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:09:05.065 passed 00:09:05.065 Test: test_shadow_doorbell_update ...passed 00:09:05.065 Test: test_build_contig_hw_sgl_request ...passed 00:09:05.065 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:09:05.065 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:09:05.065 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:09:05.065 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:09:05.065 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:09:05.065 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...[2024-04-27 04:51:34.730456] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:09:05.065 passed 00:09:05.065 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:09:05.065 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:09:05.065 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-04-27 04:51:34.730545] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:09:05.065 [2024-04-27 04:51:34.730630] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:09:05.065 [2024-04-27 04:51:34.730690] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:09:05.065 passed 00:09:05.065 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:09:05.065 00:09:05.065 [2024-04-27 04:51:34.730744] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:09:05.065 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.065 suites 1 1 n/a 0 0 00:09:05.065 tests 14 14 14 0 0 00:09:05.065 asserts 235 235 235 0 n/a 00:09:05.065 00:09:05.065 Elapsed time = 0.001 seconds 00:09:05.065 04:51:34 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:09:05.065 00:09:05.065 00:09:05.065 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.065 http://cunit.sourceforge.net/ 00:09:05.065 00:09:05.065 00:09:05.065 Suite: nvme_ns_cmd 00:09:05.065 Test: nvme_poll_group_create_test ...passed 00:09:05.065 Test: nvme_poll_group_add_remove_test ...passed 00:09:05.065 Test: nvme_poll_group_process_completions ...passed 00:09:05.065 Test: nvme_poll_group_destroy_test ...passed 00:09:05.065 Test: nvme_poll_group_get_free_stats ...passed 00:09:05.065 00:09:05.065 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.065 suites 1 1 n/a 0 0 00:09:05.065 tests 5 5 5 0 0 00:09:05.065 asserts 75 75 75 0 n/a 00:09:05.065 00:09:05.065 Elapsed time = 0.000 seconds 00:09:05.065 04:51:34 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:09:05.065 00:09:05.065 00:09:05.065 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.065 http://cunit.sourceforge.net/ 00:09:05.065 00:09:05.065 00:09:05.065 Suite: nvme_quirks 00:09:05.065 Test: test_nvme_quirks_striping ...passed 00:09:05.065 00:09:05.065 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.065 suites 1 1 n/a 0 0 00:09:05.065 tests 1 1 1 0 0 00:09:05.065 asserts 5 5 5 0 n/a 00:09:05.065 00:09:05.065 Elapsed time = 0.000 seconds 00:09:05.065 04:51:34 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:09:05.065 00:09:05.065 00:09:05.065 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.065 http://cunit.sourceforge.net/ 00:09:05.065 00:09:05.065 00:09:05.065 Suite: nvme_tcp 00:09:05.065 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:09:05.065 Test: test_nvme_tcp_build_iovs ...passed 00:09:05.065 Test: test_nvme_tcp_build_sgl_request ...[2024-04-27 04:51:34.829980] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffd4e0d6c20, and the iovcnt=16, remaining_size=28672 00:09:05.065 passed 00:09:05.065 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:09:05.065 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:09:05.065 Test: test_nvme_tcp_req_complete_safe ...passed 00:09:05.065 Test: test_nvme_tcp_req_get ...passed 00:09:05.065 Test: test_nvme_tcp_req_init ...passed 00:09:05.065 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:09:05.065 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:09:05.065 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:09:05.065 Test: test_nvme_tcp_alloc_reqs ...[2024-04-27 04:51:34.830753] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4e0d8940 is same with the state(6) to be set 00:09:05.065 passed 00:09:05.065 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-04-27 04:51:34.831122] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4e0d7ad0 is same with the state(5) to be set 00:09:05.065 passed 00:09:05.065 Test: test_nvme_tcp_pdu_ch_handle ...[2024-04-27 04:51:34.831199] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffd4e0d8600 00:09:05.065 [2024-04-27 04:51:34.831256] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:09:05.065 [2024-04-27 04:51:34.831367] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4e0d7f90 is same with the state(5) to be set 00:09:05.065 [2024-04-27 04:51:34.831458] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:09:05.065 [2024-04-27 04:51:34.831561] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4e0d7f90 is same with the state(5) to be set 00:09:05.065 [2024-04-27 04:51:34.831612] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:09:05.065 [2024-04-27 04:51:34.831661] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4e0d7f90 is same with the state(5) to be set 00:09:05.066 [2024-04-27 04:51:34.831713] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4e0d7f90 is same with the state(5) to be set 00:09:05.066 [2024-04-27 04:51:34.831773] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4e0d7f90 is same with the state(5) to be set 00:09:05.066 [2024-04-27 04:51:34.831845] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4e0d7f90 is same with the state(5) to be set 00:09:05.066 [2024-04-27 04:51:34.831890] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4e0d7f90 is same with the state(5) to be set 00:09:05.066 passed 00:09:05.066 Test: test_nvme_tcp_qpair_connect_sock ...[2024-04-27 04:51:34.831943] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4e0d7f90 is same with the state(5) to be set 00:09:05.066 [2024-04-27 04:51:34.832150] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:09:05.066 [2024-04-27 04:51:34.832236] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:09:05.066 [2024-04-27 04:51:34.832532] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:09:05.066 passed 00:09:05.066 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:09:05.066 Test: 
test_nvme_tcp_c2h_payload_handle ...[2024-04-27 04:51:34.832678] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd4e0d8140): PDU Sequence Error 00:09:05.066 passed 00:09:05.066 Test: test_nvme_tcp_icresp_handle ...[2024-04-27 04:51:34.832838] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:09:05.066 [2024-04-27 04:51:34.832887] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:09:05.066 [2024-04-27 04:51:34.832934] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4e0d7ae0 is same with the state(5) to be set 00:09:05.066 [2024-04-27 04:51:34.832985] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:09:05.066 [2024-04-27 04:51:34.833050] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4e0d7ae0 is same with the state(5) to be set 00:09:05.066 [2024-04-27 04:51:34.833117] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4e0d7ae0 is same with the state(0) to be set 00:09:05.066 passed 00:09:05.066 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:09:05.066 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:09:05.066 Test: test_nvme_tcp_ctrlr_connect_qpair ...[2024-04-27 04:51:34.833179] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd4e0d8600): PDU Sequence Error 00:09:05.066 [2024-04-27 04:51:34.833287] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffd4e0d6dc0 00:09:05.066 passed 00:09:05.066 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-04-27 04:51:34.833441] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffd4e0d6440, errno=0, rc=0 00:09:05.066 [2024-04-27 04:51:34.833509] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4e0d6440 is same with the state(5) to be set 00:09:05.066 [2024-04-27 04:51:34.833591] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4e0d6440 is same with the state(5) to be set 00:09:05.066 [2024-04-27 04:51:34.833655] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd4e0d6440 (0): Success 00:09:05.066 passed 00:09:05.066 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-04-27 04:51:34.833701] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd4e0d6440 (0): Success 00:09:05.334 [2024-04-27 04:51:34.997016] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:09:05.334 [2024-04-27 04:51:34.997243] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:09:05.334 passed 00:09:05.334 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:09:05.334 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:09:05.334 Test: test_nvme_tcp_ctrlr_construct ...[2024-04-27 04:51:34.997570] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:05.334 [2024-04-27 04:51:34.997644] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:05.334 [2024-04-27 04:51:34.997929] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:09:05.334 [2024-04-27 04:51:34.997988] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:05.334 [2024-04-27 04:51:34.998154] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:09:05.334 [2024-04-27 04:51:34.998243] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:05.334 passed 00:09:05.334 Test: test_nvme_tcp_qpair_submit_request ...[2024-04-27 04:51:34.998390] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:09:05.334 [2024-04-27 04:51:34.998503] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:05.334 [2024-04-27 04:51:34.998693] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:09:05.334 [2024-04-27 04:51:34.998770] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:09:05.334 passed 00:09:05.334 00:09:05.334 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.334 suites 1 1 n/a 0 0 00:09:05.334 tests 27 27 27 0 0 00:09:05.334 asserts 624 624 624 0 n/a 00:09:05.334 00:09:05.334 Elapsed time = 0.169 seconds 00:09:05.334 04:51:35 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:09:05.334 00:09:05.334 00:09:05.334 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.334 http://cunit.sourceforge.net/ 00:09:05.334 00:09:05.334 00:09:05.334 Suite: nvme_transport 00:09:05.334 Test: test_nvme_get_transport ...passed 00:09:05.334 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:09:05.334 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:09:05.334 Test: test_nvme_transport_poll_group_add_remove ...passed 00:09:05.334 Test: test_ctrlr_get_memory_domains ...passed 00:09:05.334 00:09:05.334 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.334 suites 1 1 n/a 0 0 00:09:05.334 tests 5 5 5 0 0 00:09:05.334 asserts 28 28 28 0 n/a 00:09:05.334 00:09:05.334 Elapsed time = 0.000 seconds 00:09:05.334 04:51:35 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:09:05.334 00:09:05.334 00:09:05.334 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.334 http://cunit.sourceforge.net/ 00:09:05.334 00:09:05.334 00:09:05.334 Suite: nvme_io_msg 00:09:05.334 Test: test_nvme_io_msg_send ...passed 00:09:05.334 Test: 
test_nvme_io_msg_process ...passed 00:09:05.334 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:09:05.334 00:09:05.334 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.334 suites 1 1 n/a 0 0 00:09:05.334 tests 3 3 3 0 0 00:09:05.334 asserts 56 56 56 0 n/a 00:09:05.334 00:09:05.334 Elapsed time = 0.000 seconds 00:09:05.334 04:51:35 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:09:05.334 00:09:05.334 00:09:05.334 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.334 http://cunit.sourceforge.net/ 00:09:05.334 00:09:05.334 00:09:05.334 Suite: nvme_pcie_common 00:09:05.334 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-04-27 04:51:35.121491] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:09:05.334 passed 00:09:05.334 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:09:05.334 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:09:05.334 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-04-27 04:51:35.122490] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:09:05.334 [2024-04-27 04:51:35.122735] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:09:05.334 passed 00:09:05.334 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-04-27 04:51:35.122819] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:09:05.334 passed 00:09:05.334 Test: test_nvme_pcie_poll_group_get_stats ...[2024-04-27 04:51:35.123393] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:05.334 passed 00:09:05.334 00:09:05.334 [2024-04-27 04:51:35.123464] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:05.334 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.334 suites 1 1 n/a 0 0 00:09:05.334 tests 6 6 6 0 0 00:09:05.334 asserts 148 148 148 0 n/a 00:09:05.334 00:09:05.334 Elapsed time = 0.002 seconds 00:09:05.334 04:51:35 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:09:05.334 00:09:05.334 00:09:05.334 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.334 http://cunit.sourceforge.net/ 00:09:05.334 00:09:05.334 00:09:05.334 Suite: nvme_fabric 00:09:05.334 Test: test_nvme_fabric_prop_set_cmd ...passed 00:09:05.334 Test: test_nvme_fabric_prop_get_cmd ...passed 00:09:05.334 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:09:05.334 Test: test_nvme_fabric_discover_probe ...passed 00:09:05.334 Test: test_nvme_fabric_qpair_connect ...[2024-04-27 04:51:35.155673] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:09:05.334 passed 00:09:05.334 00:09:05.334 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.334 suites 1 1 n/a 0 0 00:09:05.334 tests 5 5 5 0 0 00:09:05.334 asserts 60 60 60 0 n/a 00:09:05.334 00:09:05.334 Elapsed time = 0.001 seconds 00:09:05.334 04:51:35 -- unit/unittest.sh@102 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:09:05.334 00:09:05.334 00:09:05.334 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.334 http://cunit.sourceforge.net/ 00:09:05.334 00:09:05.334 00:09:05.334 Suite: nvme_opal 00:09:05.334 Test: test_opal_nvme_security_recv_send_done ...passed 00:09:05.334 Test: test_opal_add_short_atom_header ...passed 00:09:05.334 00:09:05.334 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.334 suites 1 1 n/a 0 0 00:09:05.334 tests 2 2 2 0 0 00:09:05.334 asserts 22 22 22 0 n/a 00:09:05.334 00:09:05.334 Elapsed time = 0.000 seconds 00:09:05.334 [2024-04-27 04:51:35.184474] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:09:05.334 00:09:05.334 real 0m1.293s 00:09:05.334 user 0m0.629s 00:09:05.334 sys 0m0.515s 00:09:05.334 04:51:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.334 ************************************ 00:09:05.334 04:51:35 -- common/autotest_common.sh@10 -- # set +x 00:09:05.334 END TEST unittest_nvme 00:09:05.334 ************************************ 00:09:05.594 04:51:35 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:09:05.594 04:51:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:05.594 04:51:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:05.594 04:51:35 -- common/autotest_common.sh@10 -- # set +x 00:09:05.594 ************************************ 00:09:05.594 START TEST unittest_log 00:09:05.594 ************************************ 00:09:05.594 04:51:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:09:05.594 00:09:05.594 00:09:05.594 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.594 http://cunit.sourceforge.net/ 00:09:05.594 00:09:05.594 00:09:05.594 Suite: log 00:09:05.594 Test: log_test ...[2024-04-27 04:51:35.274374] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:09:05.594 passed 00:09:05.594 Test: deprecation ...[2024-04-27 04:51:35.274874] log_ut.c: 55:log_test: *DEBUG*: log test 00:09:05.594 log dump test: 00:09:05.594 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:09:05.594 spdk dump test: 00:09:05.594 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:09:05.594 spdk dump test: 00:09:05.594 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:09:05.594 00000010 65 20 63 68 61 72 73 e chars 00:09:06.531 passed 00:09:06.531 00:09:06.531 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.531 suites 1 1 n/a 0 0 00:09:06.531 tests 2 2 2 0 0 00:09:06.531 asserts 73 73 73 0 n/a 00:09:06.531 00:09:06.531 Elapsed time = 0.001 seconds 00:09:06.531 00:09:06.531 real 0m1.032s 00:09:06.531 user 0m0.004s 00:09:06.531 sys 0m0.028s 00:09:06.531 04:51:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.531 ************************************ 00:09:06.531 END TEST unittest_log 00:09:06.531 ************************************ 00:09:06.531 04:51:36 -- common/autotest_common.sh@10 -- # set +x 00:09:06.531 04:51:36 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:09:06.531 04:51:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:06.531 04:51:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:06.531 04:51:36 -- common/autotest_common.sh@10 -- # set +x 00:09:06.531 
************************************ 00:09:06.531 START TEST unittest_lvol 00:09:06.531 ************************************ 00:09:06.531 04:51:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:09:06.531 00:09:06.531 00:09:06.531 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.531 http://cunit.sourceforge.net/ 00:09:06.531 00:09:06.531 00:09:06.531 Suite: lvol 00:09:06.531 Test: lvs_init_unload_success ...[2024-04-27 04:51:36.367222] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:09:06.531 passed 00:09:06.531 Test: lvs_init_destroy_success ...[2024-04-27 04:51:36.368042] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:09:06.531 passed 00:09:06.531 Test: lvs_init_opts_success ...passed 00:09:06.531 Test: lvs_unload_lvs_is_null_fail ...[2024-04-27 04:51:36.368685] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:09:06.531 passed 00:09:06.531 Test: lvs_names ...[2024-04-27 04:51:36.369104] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:09:06.531 [2024-04-27 04:51:36.369462] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:09:06.531 [2024-04-27 04:51:36.369825] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:09:06.531 passed 00:09:06.531 Test: lvol_create_destroy_success ...passed 00:09:06.531 Test: lvol_create_fail ...[2024-04-27 04:51:36.370681] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:09:06.531 [2024-04-27 04:51:36.370797] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:09:06.531 passed 00:09:06.531 Test: lvol_destroy_fail ...[2024-04-27 04:51:36.371415] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:09:06.531 passed 00:09:06.531 Test: lvol_close ...[2024-04-27 04:51:36.371864] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:09:06.531 [2024-04-27 04:51:36.371936] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:09:06.531 passed 00:09:06.531 Test: lvol_resize ...passed 00:09:06.532 Test: lvol_set_read_only ...passed 00:09:06.532 Test: test_lvs_load ...[2024-04-27 04:51:36.373245] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:09:06.532 [2024-04-27 04:51:36.373305] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:09:06.532 passed 00:09:06.532 Test: lvols_load ...[2024-04-27 04:51:36.373813] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:09:06.532 [2024-04-27 04:51:36.374049] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:09:06.532 passed 00:09:06.532 Test: lvol_open ...passed 00:09:06.532 Test: lvol_snapshot ...passed 00:09:06.532 Test: lvol_snapshot_fail ...[2024-04-27 04:51:36.375386] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:09:06.532 passed 00:09:06.532 
Test: lvol_clone ...passed 00:09:06.532 Test: lvol_clone_fail ...[2024-04-27 04:51:36.376399] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:09:06.532 passed 00:09:06.532 Test: lvol_iter_clones ...passed 00:09:06.532 Test: lvol_refcnt ...[2024-04-27 04:51:36.377296] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 3bc15682-e977-4d12-b3ea-cc39911aff26 because it is still open 00:09:06.532 passed 00:09:06.532 Test: lvol_names ...[2024-04-27 04:51:36.377728] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:09:06.532 [2024-04-27 04:51:36.377884] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:06.532 [2024-04-27 04:51:36.378286] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:09:06.532 passed 00:09:06.532 Test: lvol_create_thin_provisioned ...passed 00:09:06.532 Test: lvol_rename ...[2024-04-27 04:51:36.378933] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:06.532 [2024-04-27 04:51:36.379222] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:09:06.532 passed 00:09:06.532 Test: lvs_rename ...[2024-04-27 04:51:36.379556] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:09:06.532 passed 00:09:06.532 Test: lvol_inflate ...[2024-04-27 04:51:36.380070] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:09:06.532 passed 00:09:06.532 Test: lvol_decouple_parent ...[2024-04-27 04:51:36.380575] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:09:06.532 passed 00:09:06.532 Test: lvol_get_xattr ...passed 00:09:06.532 Test: lvol_esnap_reload ...passed 00:09:06.532 Test: lvol_esnap_create_bad_args ...[2024-04-27 04:51:36.381317] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:09:06.532 [2024-04-27 04:51:36.381561] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:09:06.532 [2024-04-27 04:51:36.381617] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:09:06.532 [2024-04-27 04:51:36.381939] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:06.532 [2024-04-27 04:51:36.382214] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:09:06.532 passed 00:09:06.532 Test: lvol_esnap_create_delete ...passed 00:09:06.532 Test: lvol_esnap_load_esnaps ...[2024-04-27 04:51:36.382757] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:09:06.532 passed 00:09:06.532 Test: lvol_esnap_missing ...[2024-04-27 04:51:36.383208] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:09:06.532 [2024-04-27 04:51:36.383273] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:09:06.532 passed 00:09:06.532 Test: lvol_esnap_hotplug ... 00:09:06.532 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:09:06.532 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:09:06.532 [2024-04-27 04:51:36.384275] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol b34b4be1-cd1d-4569-80ed-71f041f5e932: failed to create esnap bs_dev: error -12 00:09:06.532 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:09:06.532 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:09:06.532 [2024-04-27 04:51:36.384967] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 77aa26bf-0665-45e1-8af1-133ceeb3dd05: failed to create esnap bs_dev: error -12 00:09:06.532 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:09:06.532 [2024-04-27 04:51:36.385336] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 722e3cb9-2b42-477c-b3da-5f2722f9baef: failed to create esnap bs_dev: error -12 00:09:06.532 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:09:06.532 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:09:06.532 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:09:06.532 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:09:06.532 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:09:06.532 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:09:06.532 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:09:06.532 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:09:06.532 passed 00:09:06.532 Test: lvol_get_by ...passed 00:09:06.532 00:09:06.532 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.532 suites 1 1 n/a 0 0 00:09:06.532 tests 34 34 34 0 0 00:09:06.532 asserts 1439 1439 1439 0 n/a 00:09:06.532 00:09:06.532 Elapsed time = 0.021 seconds 00:09:06.532 00:09:06.532 real 0m0.059s 00:09:06.532 user 0m0.028s 00:09:06.532 sys 0m0.031s 00:09:06.532 04:51:36 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.532 04:51:36 -- common/autotest_common.sh@10 -- # set +x 00:09:06.532 ************************************ 00:09:06.532 END TEST unittest_lvol 00:09:06.532 ************************************ 00:09:06.791 04:51:36 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:06.791 04:51:36 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:09:06.791 04:51:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:06.791 04:51:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:06.791 04:51:36 -- common/autotest_common.sh@10 -- # set +x 00:09:06.791 ************************************ 00:09:06.792 START TEST unittest_nvme_rdma 00:09:06.792 ************************************ 00:09:06.792 04:51:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:09:06.792 00:09:06.792 00:09:06.792 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.792 http://cunit.sourceforge.net/ 00:09:06.792 00:09:06.792 00:09:06.792 Suite: nvme_rdma 00:09:06.792 Test: test_nvme_rdma_build_sgl_request ...[2024-04-27 04:51:36.479204] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:09:06.792 [2024-04-27 04:51:36.479933] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:09:06.792 [2024-04-27 04:51:36.480054] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:09:06.792 passed 00:09:06.792 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:09:06.792 Test: test_nvme_rdma_build_contig_request ...[2024-04-27 04:51:36.480389] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:09:06.792 passed 00:09:06.792 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:09:06.792 Test: test_nvme_rdma_create_reqs ...[2024-04-27 04:51:36.480756] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:09:06.792 passed 00:09:06.792 Test: test_nvme_rdma_create_rsps ...[2024-04-27 04:51:36.481360] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:09:06.792 passed 00:09:06.792 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-04-27 04:51:36.481884] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:09:06.792 [2024-04-27 04:51:36.481976] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:09:06.792 passed 00:09:06.792 Test: test_nvme_rdma_poller_create ...passed 00:09:06.792 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-04-27 04:51:36.482416] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:09:06.792 passed 00:09:06.792 Test: test_nvme_rdma_ctrlr_construct ...passed 00:09:06.792 Test: test_nvme_rdma_req_put_and_get ...passed 00:09:06.792 Test: test_nvme_rdma_req_init ...passed 00:09:06.792 Test: test_nvme_rdma_validate_cm_event ...[2024-04-27 04:51:36.483055] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:09:06.792 [2024-04-27 04:51:36.483116] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:09:06.792 passed 00:09:06.792 Test: test_nvme_rdma_qpair_init ...passed 00:09:06.792 Test: test_nvme_rdma_qpair_submit_request ...passed 00:09:06.792 Test: test_nvme_rdma_memory_domain ...[2024-04-27 04:51:36.483685] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:09:06.792 passed 00:09:06.792 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:09:06.792 Test: test_rdma_get_memory_translation ...[2024-04-27 04:51:36.484008] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:09:06.792 [2024-04-27 04:51:36.484087] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:09:06.792 passed 00:09:06.792 Test: test_get_rdma_qpair_from_wc ...passed 00:09:06.792 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:09:06.792 Test: test_nvme_rdma_poll_group_get_stats ...[2024-04-27 04:51:36.484675] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:06.792 [2024-04-27 04:51:36.484836] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:06.792 passed 00:09:06.792 Test: test_nvme_rdma_qpair_set_poller ...[2024-04-27 04:51:36.485214] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:09:06.792 [2024-04-27 04:51:36.485277] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:09:06.792 [2024-04-27 04:51:36.485647] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fff7bf31770 on poll group 0x60b0000001a0 00:09:06.792 [2024-04-27 04:51:36.485725] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:09:06.792 [2024-04-27 04:51:36.485772] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:09:06.792 [2024-04-27 04:51:36.486035] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fff7bf31770 on poll group 0x60b0000001a0 00:09:06.792 [2024-04-27 04:51:36.486133] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:09:06.792 passed 00:09:06.792 00:09:06.792 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.792 suites 1 1 n/a 0 0 00:09:06.792 tests 22 22 22 0 0 00:09:06.792 asserts 412 412 412 0 n/a 00:09:06.792 00:09:06.792 Elapsed time = 0.007 seconds 00:09:06.792 00:09:06.792 real 0m0.037s 00:09:06.792 user 0m0.012s 00:09:06.792 sys 0m0.025s 00:09:06.792 04:51:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.792 04:51:36 -- common/autotest_common.sh@10 -- # set +x 00:09:06.792 ************************************ 00:09:06.792 END TEST unittest_nvme_rdma 00:09:06.792 ************************************ 00:09:06.792 04:51:36 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:09:06.792 04:51:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:06.792 04:51:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:06.792 04:51:36 -- common/autotest_common.sh@10 -- # set +x 00:09:06.792 ************************************ 00:09:06.792 START TEST unittest_nvmf_transport 00:09:06.792 ************************************ 00:09:06.792 04:51:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:09:06.792 00:09:06.792 00:09:06.792 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.792 http://cunit.sourceforge.net/ 00:09:06.792 00:09:06.792 00:09:06.792 Suite: nvmf 00:09:06.792 Test: test_spdk_nvmf_transport_create ...[2024-04-27 04:51:36.579115] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:09:06.792 [2024-04-27 04:51:36.580169] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:09:06.792 [2024-04-27 04:51:36.580401] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:09:06.792 [2024-04-27 04:51:36.580768] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:09:06.792 passed 00:09:06.792 Test: test_nvmf_transport_poll_group_create ...passed 00:09:06.792 Test: test_spdk_nvmf_transport_opts_init ...[2024-04-27 04:51:36.581287] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:09:06.792 [2024-04-27 04:51:36.581563] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:09:06.792 [2024-04-27 04:51:36.581765] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:09:06.792 passed 00:09:06.792 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:09:06.792 00:09:06.792 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.792 suites 1 1 n/a 0 0 00:09:06.792 tests 4 4 4 0 0 00:09:06.792 asserts 49 49 49 0 n/a 00:09:06.792 00:09:06.792 Elapsed time = 0.002 seconds 00:09:06.792 00:09:06.792 real 0m0.045s 00:09:06.792 user 0m0.025s 00:09:06.792 sys 0m0.019s 00:09:06.792 04:51:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.792 ************************************ 00:09:06.792 END TEST unittest_nvmf_transport 00:09:06.792 ************************************ 00:09:06.792 04:51:36 -- common/autotest_common.sh@10 -- # set +x 00:09:06.792 04:51:36 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:09:06.792 04:51:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:06.792 04:51:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:06.792 04:51:36 -- common/autotest_common.sh@10 -- # set +x 00:09:06.792 ************************************ 00:09:06.792 START TEST unittest_rdma 00:09:06.792 ************************************ 00:09:06.792 04:51:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:09:06.792 00:09:06.792 00:09:06.792 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.792 http://cunit.sourceforge.net/ 00:09:06.792 00:09:06.792 00:09:06.792 Suite: rdma_common 00:09:06.792 Test: test_spdk_rdma_pd ...[2024-04-27 04:51:36.665573] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:09:06.792 passed 00:09:06.792 00:09:06.792 [2024-04-27 04:51:36.665960] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:09:06.792 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.792 suites 1 1 n/a 0 0 00:09:06.792 tests 1 1 1 0 0 00:09:06.792 asserts 31 31 31 0 n/a 00:09:06.792 00:09:06.792 Elapsed time = 0.001 seconds 00:09:06.792 00:09:06.792 real 0m0.028s 00:09:06.792 user 0m0.024s 00:09:06.792 sys 0m0.004s 00:09:06.792 04:51:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.792 04:51:36 -- common/autotest_common.sh@10 -- # set +x 00:09:06.792 ************************************ 00:09:06.792 END TEST unittest_rdma 00:09:06.792 ************************************ 00:09:07.052 04:51:36 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:07.052 04:51:36 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:09:07.052 04:51:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:07.052 04:51:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:07.052 04:51:36 -- common/autotest_common.sh@10 -- # set +x 00:09:07.052 ************************************ 00:09:07.052 START TEST unittest_nvme_cuse 00:09:07.052 ************************************ 00:09:07.052 04:51:36 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:09:07.052 00:09:07.052 00:09:07.052 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.052 http://cunit.sourceforge.net/ 00:09:07.052 00:09:07.052 00:09:07.052 Suite: nvme_cuse 00:09:07.052 Test: test_cuse_nvme_submit_io_read_write ...passed 00:09:07.052 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:09:07.052 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:09:07.052 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:09:07.052 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:09:07.052 Test: test_cuse_nvme_submit_io ...[2024-04-27 04:51:36.747448] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:09:07.052 passed 00:09:07.052 Test: test_cuse_nvme_reset ...[2024-04-27 04:51:36.747782] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:09:07.052 passed 00:09:07.052 Test: test_nvme_cuse_stop ...passed 00:09:07.052 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:09:07.052 00:09:07.052 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.052 suites 1 1 n/a 0 0 00:09:07.052 tests 9 9 9 0 0 00:09:07.052 asserts 121 121 121 0 n/a 00:09:07.052 00:09:07.052 Elapsed time = 0.002 seconds 00:09:07.052 00:09:07.052 real 0m0.030s 00:09:07.052 user 0m0.014s 00:09:07.052 sys 0m0.017s 00:09:07.052 04:51:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.052 04:51:36 -- common/autotest_common.sh@10 -- # set +x 00:09:07.052 ************************************ 00:09:07.052 END TEST unittest_nvme_cuse 00:09:07.052 ************************************ 00:09:07.052 04:51:36 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:09:07.052 04:51:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:07.052 04:51:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:07.052 04:51:36 -- common/autotest_common.sh@10 -- # set +x 00:09:07.052 ************************************ 00:09:07.052 START TEST unittest_nvmf 00:09:07.052 ************************************ 00:09:07.052 04:51:36 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:09:07.052 04:51:36 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:09:07.052 00:09:07.052 00:09:07.052 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.052 http://cunit.sourceforge.net/ 00:09:07.052 00:09:07.052 00:09:07.052 Suite: nvmf 00:09:07.052 Test: test_get_log_page ...passed 00:09:07.052 Test: test_process_fabrics_cmd ...passed 00:09:07.052 Test: test_connect ...[2024-04-27 04:51:36.827394] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:09:07.052 [2024-04-27 04:51:36.828321] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:09:07.052 [2024-04-27 04:51:36.828436] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:09:07.052 [2024-04-27 04:51:36.828488] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:09:07.052 [2024-04-27 04:51:36.828531] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
00:09:07.052 [2024-04-27 04:51:36.828672] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:09:07.053 [2024-04-27 04:51:36.828714] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:09:07.053 [2024-04-27 04:51:36.828808] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:09:07.053 [2024-04-27 04:51:36.828853] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:09:07.053 [2024-04-27 04:51:36.828949] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:09:07.053 [2024-04-27 04:51:36.829029] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:09:07.053 [2024-04-27 04:51:36.829299] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:09:07.053 [2024-04-27 04:51:36.829391] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:09:07.053 [2024-04-27 04:51:36.829481] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:09:07.053 [2024-04-27 04:51:36.829546] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:09:07.053 [2024-04-27 04:51:36.829663] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:09:07.053 [2024-04-27 04:51:36.829779] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:09:07.053 passed 00:09:07.053 Test: test_get_ns_id_desc_list ...passed 00:09:07.053 Test: test_identify_ns ...[2024-04-27 04:51:36.830026] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:07.053 [2024-04-27 04:51:36.830216] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:09:07.053 [2024-04-27 04:51:36.830342] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:09:07.053 passed 00:09:07.053 Test: test_identify_ns_iocs_specific ...[2024-04-27 04:51:36.830491] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:07.053 [2024-04-27 04:51:36.830731] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:07.053 passed 00:09:07.053 Test: test_reservation_write_exclusive ...passed 00:09:07.053 Test: test_reservation_exclusive_access ...passed 00:09:07.053 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:09:07.053 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:09:07.053 Test: test_reservation_notification_log_page ...passed 00:09:07.053 Test: test_get_dif_ctx ...passed 00:09:07.053 Test: test_set_get_features ...[2024-04-27 04:51:36.831218] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:09:07.053 [2024-04-27 04:51:36.831279] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:09:07.053 [2024-04-27 04:51:36.831326] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:09:07.053 [2024-04-27 04:51:36.831387] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:09:07.053 passed 00:09:07.053 Test: test_identify_ctrlr ...passed 00:09:07.053 Test: test_identify_ctrlr_iocs_specific ...passed 00:09:07.053 Test: test_custom_admin_cmd ...passed 00:09:07.053 Test: test_fused_compare_and_write ...[2024-04-27 04:51:36.831777] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:09:07.053 passed 00:09:07.053 Test: test_multi_async_event_reqs ...[2024-04-27 04:51:36.831824] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:09:07.053 [2024-04-27 04:51:36.831884] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:09:07.053 passed 00:09:07.053 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:09:07.053 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:09:07.053 Test: test_multi_async_events ...passed 00:09:07.053 Test: test_rae ...passed 00:09:07.053 Test: test_nvmf_ctrlr_create_destruct ...passed 00:09:07.053 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:09:07.053 Test: test_spdk_nvmf_request_zcopy_start ...[2024-04-27 04:51:36.832343] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:09:07.053 passed 00:09:07.053 Test: test_zcopy_read ...passed 00:09:07.053 Test: test_zcopy_write ...passed 00:09:07.053 Test: test_nvmf_property_set ...passed 00:09:07.053 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-04-27 04:51:36.832529] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:09:07.053 [2024-04-27 04:51:36.832735] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:09:07.053 passed 00:09:07.053 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:09:07.053 00:09:07.053 [2024-04-27 04:51:36.832792] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:09:07.053 [2024-04-27 04:51:36.832835] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:09:07.053 [2024-04-27 04:51:36.832870] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:09:07.053 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.053 suites 1 1 n/a 0 0 00:09:07.053 tests 30 30 30 0 0 00:09:07.053 asserts 885 885 885 0 n/a 00:09:07.053 00:09:07.053 Elapsed time = 0.006 seconds 00:09:07.053 04:51:36 -- unit/unittest.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:09:07.053 00:09:07.053 00:09:07.053 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.053 http://cunit.sourceforge.net/ 00:09:07.053 00:09:07.053 00:09:07.053 Suite: nvmf 00:09:07.053 Test: test_get_rw_params ...passed 00:09:07.053 Test: test_lba_in_range ...passed 00:09:07.053 Test: test_get_dif_ctx ...passed 00:09:07.053 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:09:07.053 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-04-27 04:51:36.868525] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:09:07.053 [2024-04-27 04:51:36.868964] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:09:07.053 [2024-04-27 04:51:36.869124] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:09:07.053 passed 00:09:07.053 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-04-27 04:51:36.869205] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:09:07.053 [2024-04-27 04:51:36.869345] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:09:07.053 passed 00:09:07.053 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-04-27 04:51:36.869536] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:09:07.053 [2024-04-27 04:51:36.869595] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:09:07.053 [2024-04-27 04:51:36.869692] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:09:07.053 [2024-04-27 04:51:36.869749] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:09:07.053 passed 00:09:07.053 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:09:07.053 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:09:07.053 00:09:07.053 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.053 suites 1 1 n/a 0 0 00:09:07.053 tests 9 9 9 0 0 00:09:07.053 asserts 157 157 157 0 n/a 00:09:07.053 00:09:07.053 Elapsed time = 0.001 seconds 00:09:07.053 04:51:36 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:09:07.053 00:09:07.053 00:09:07.053 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.053 http://cunit.sourceforge.net/ 00:09:07.053 00:09:07.053 00:09:07.053 Suite: nvmf 00:09:07.053 Test: test_discovery_log ...passed 00:09:07.053 Test: test_discovery_log_with_filters ...passed 00:09:07.053 00:09:07.053 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.054 suites 1 1 n/a 0 0 00:09:07.054 tests 2 2 2 0 0 00:09:07.054 asserts 238 238 238 0 n/a 00:09:07.054 00:09:07.054 Elapsed time = 0.003 seconds 00:09:07.054 04:51:36 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:09:07.054 00:09:07.054 00:09:07.054 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.054 http://cunit.sourceforge.net/ 00:09:07.054 00:09:07.054 00:09:07.054 Suite: nvmf 
00:09:07.054 Test: nvmf_test_create_subsystem ...[2024-04-27 04:51:36.946671] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:09:07.054 [2024-04-27 04:51:36.947090] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:09:07.054 [2024-04-27 04:51:36.947206] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:09:07.054 [2024-04-27 04:51:36.947249] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:09:07.054 [2024-04-27 04:51:36.947283] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:09:07.054 [2024-04-27 04:51:36.947328] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:09:07.054 [2024-04-27 04:51:36.947430] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:09:07.054 [2024-04-27 04:51:36.947599] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:09:07.054 [2024-04-27 04:51:36.947699] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:09:07.054 [2024-04-27 04:51:36.947741] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:09:07.054 [2024-04-27 04:51:36.947779] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:09:07.054 passed 00:09:07.054 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-04-27 04:51:36.947984] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:09:07.054 [2024-04-27 04:51:36.948108] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1734:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:09:07.314 passed 00:09:07.314 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:09:07.314 Test: test_reservation_register ...[2024-04-27 04:51:36.948376] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:07.314 [2024-04-27 04:51:36.948518] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2841:nvmf_ns_reservation_register: *ERROR*: No registrant 00:09:07.314 passed 00:09:07.314 Test: test_reservation_register_with_ptpl ...passed 00:09:07.314 Test: test_reservation_acquire_preempt_1 ...[2024-04-27 04:51:36.949531] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:07.314 passed 00:09:07.314 Test: test_reservation_acquire_release_with_ptpl ...passed 00:09:07.314 Test: test_reservation_release ...[2024-04-27 04:51:36.951065] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:07.314 passed 00:09:07.314 Test: test_reservation_unregister_notification ...[2024-04-27 04:51:36.951303] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:07.314 passed 00:09:07.314 Test: test_reservation_release_notification ...[2024-04-27 04:51:36.951507] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:07.314 passed 00:09:07.314 Test: test_reservation_release_notification_write_exclusive ...[2024-04-27 04:51:36.951722] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:07.314 passed 00:09:07.314 Test: test_reservation_clear_notification ...[2024-04-27 04:51:36.951914] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:07.314 passed 00:09:07.314 Test: test_reservation_preempt_notification ...[2024-04-27 04:51:36.952127] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:07.314 passed 00:09:07.314 Test: test_spdk_nvmf_ns_event ...passed 00:09:07.314 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:09:07.314 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:09:07.314 Test: test_spdk_nvmf_subsystem_add_host ...[2024-04-27 04:51:36.952814] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:09:07.314 passed 00:09:07.314 Test: test_nvmf_ns_reservation_report ...passed 00:09:07.314 Test: test_nvmf_nqn_is_valid ...[2024-04-27 04:51:36.952913] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:09:07.314 [2024-04-27 04:51:36.953032] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3146:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:09:07.314 [2024-04-27 04:51:36.953115] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:09:07.314 passed 00:09:07.314 Test: test_nvmf_ns_reservation_restore ...passed 00:09:07.314 Test: test_nvmf_subsystem_state_change ...[2024-04-27 04:51:36.953155] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:226d4a51-ab33-41da-a15d-44d4b658c75": uuid is not the correct length 00:09:07.314 [2024-04-27 04:51:36.953195] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:09:07.314 [2024-04-27 04:51:36.953295] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2340:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:09:07.314 passed 00:09:07.314 Test: test_nvmf_reservation_custom_ops ...passed 00:09:07.314 00:09:07.314 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.314 suites 1 1 n/a 0 0 00:09:07.314 tests 22 22 22 0 0 00:09:07.314 asserts 405 405 405 0 n/a 00:09:07.314 00:09:07.314 Elapsed time = 0.008 seconds 00:09:07.314 04:51:36 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:09:07.314 00:09:07.314 00:09:07.314 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.314 http://cunit.sourceforge.net/ 00:09:07.314 00:09:07.314 00:09:07.314 Suite: nvmf 00:09:07.314 Test: test_nvmf_tcp_create ...[2024-04-27 04:51:37.032305] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:09:07.314 passed 00:09:07.314 Test: test_nvmf_tcp_destroy ...passed 00:09:07.314 Test: test_nvmf_tcp_poll_group_create ...passed 00:09:07.314 Test: test_nvmf_tcp_send_c2h_data ...passed 00:09:07.314 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:09:07.314 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:09:07.314 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:09:07.314 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-04-27 04:51:37.171017] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:07.314 passed 00:09:07.314 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:09:07.314 Test: test_nvmf_tcp_icreq_handle ...[2024-04-27 04:51:37.171153] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4af799f0 is same with the state(5) to be set 
00:09:07.314 [2024-04-27 04:51:37.171277] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4af799f0 is same with the state(5) to be set 00:09:07.314 [2024-04-27 04:51:37.171332] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:07.314 [2024-04-27 04:51:37.171373] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4af799f0 is same with the state(5) to be set 00:09:07.314 [2024-04-27 04:51:37.171495] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:09:07.314 [2024-04-27 04:51:37.171608] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:07.314 [2024-04-27 04:51:37.171719] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4af799f0 is same with the state(5) to be set 00:09:07.314 [2024-04-27 04:51:37.171763] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:09:07.314 [2024-04-27 04:51:37.171809] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4af799f0 is same with the state(5) to be set 00:09:07.314 [2024-04-27 04:51:37.171849] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:07.314 [2024-04-27 04:51:37.171895] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4af799f0 is same with the state(5) to be set 00:09:07.314 [2024-04-27 04:51:37.171935] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:09:07.314 passed 00:09:07.314 Test: test_nvmf_tcp_check_xfer_type ...passed 00:09:07.314 Test: test_nvmf_tcp_invalid_sgl ...passed 00:09:07.314 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-04-27 04:51:37.172010] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4af799f0 is same with the state(5) to be set 00:09:07.314 [2024-04-27 04:51:37.172111] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:09:07.314 [2024-04-27 04:51:37.172169] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:07.314 [2024-04-27 04:51:37.172226] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4af799f0 is same with the state(5) to be set 00:09:07.315 [2024-04-27 04:51:37.172302] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffd4af7a750 00:09:07.315 [2024-04-27 04:51:37.172418] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:07.315 [2024-04-27 04:51:37.172486] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4af79eb0 is same with the state(5) to be set 00:09:07.315 [2024-04-27 04:51:37.172540] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffd4af79eb0 00:09:07.315 [2024-04-27 04:51:37.172606] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:07.315 [2024-04-27 04:51:37.172655] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4af79eb0 is same with the state(5) to be set 00:09:07.315 [2024-04-27 04:51:37.172696] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:09:07.315 [2024-04-27 04:51:37.172742] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:07.315 [2024-04-27 04:51:37.172808] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4af79eb0 is same with the state(5) to be set 00:09:07.315 [2024-04-27 04:51:37.172867] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:09:07.315 [2024-04-27 04:51:37.172911] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:07.315 [2024-04-27 04:51:37.172959] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4af79eb0 is same with the state(5) to be set 00:09:07.315 [2024-04-27 04:51:37.173023] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:07.315 [2024-04-27 04:51:37.173078] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4af79eb0 is same with the state(5) to be set 00:09:07.315 [2024-04-27 04:51:37.173145] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:07.315 [2024-04-27 04:51:37.173195] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4af79eb0 is same with the state(5) to be set 00:09:07.315 [2024-04-27 04:51:37.173260] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:07.315 [2024-04-27 04:51:37.173303] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4af79eb0 is same with the state(5) to be set 00:09:07.315 [2024-04-27 04:51:37.173346] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:07.315 [2024-04-27 04:51:37.173385] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4af79eb0 is same with the state(5) to be set 00:09:07.315 [2024-04-27 04:51:37.173452] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:07.315 [2024-04-27 04:51:37.173507] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4af79eb0 is same with the state(5) to be set 00:09:07.315 [2024-04-27 04:51:37.173564] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:07.315 [2024-04-27 04:51:37.173614] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4af79eb0 is same with the state(5) to be set 00:09:07.315 passed 00:09:07.315 Test: test_nvmf_tcp_tls_add_remove_credentials ...passed 00:09:07.315 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-04-27 04:51:37.202512] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:09:07.315 passed 00:09:07.315 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-04-27 04:51:37.202655] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:09:07.315 [2024-04-27 04:51:37.203156] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:09:07.315 passed 00:09:07.315 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-04-27 04:51:37.203224] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:09:07.315 [2024-04-27 04:51:37.203494] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:09:07.315 [2024-04-27 04:51:37.203544] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:09:07.315 passed 00:09:07.315 00:09:07.315 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.315 suites 1 1 n/a 0 0 00:09:07.315 tests 17 17 17 0 0 00:09:07.315 asserts 222 222 222 0 n/a 00:09:07.315 00:09:07.315 Elapsed time = 0.204 seconds 00:09:07.574 04:51:37 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:09:07.574 00:09:07.574 00:09:07.574 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.574 http://cunit.sourceforge.net/ 00:09:07.574 00:09:07.574 00:09:07.574 Suite: nvmf 00:09:07.574 Test: test_nvmf_tgt_create_poll_group ...passed 00:09:07.574 00:09:07.574 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.574 suites 1 1 n/a 0 0 00:09:07.574 tests 1 1 1 0 0 00:09:07.574 asserts 17 17 17 0 n/a 00:09:07.574 00:09:07.574 Elapsed time = 0.026 seconds 00:09:07.574 00:09:07.574 real 0m0.571s 00:09:07.574 user 0m0.252s 00:09:07.574 sys 0m0.321s 00:09:07.574 04:51:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.574 04:51:37 -- common/autotest_common.sh@10 -- # set +x 00:09:07.574 ************************************ 00:09:07.574 END TEST unittest_nvmf 00:09:07.574 ************************************ 00:09:07.574 04:51:37 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:07.574 04:51:37 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:07.574 04:51:37 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:09:07.574 04:51:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:07.574 04:51:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:07.574 04:51:37 -- common/autotest_common.sh@10 -- # set +x 00:09:07.574 
************************************ 00:09:07.574 START TEST unittest_nvmf_rdma 00:09:07.574 ************************************ 00:09:07.574 04:51:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:09:07.574 00:09:07.574 00:09:07.574 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.574 http://cunit.sourceforge.net/ 00:09:07.574 00:09:07.574 00:09:07.574 Suite: nvmf 00:09:07.574 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-04-27 04:51:37.463514] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:09:07.574 [2024-04-27 04:51:37.464547] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:09:07.574 [2024-04-27 04:51:37.464796] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:09:07.574 passed 00:09:07.574 Test: test_spdk_nvmf_rdma_request_process ...passed 00:09:07.574 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:09:07.574 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:09:07.574 Test: test_nvmf_rdma_opts_init ...passed 00:09:07.574 Test: test_nvmf_rdma_request_free_data ...passed 00:09:07.574 Test: test_nvmf_rdma_update_ibv_state ...[2024-04-27 04:51:37.466337] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 00:09:07.574 passed 00:09:07.574 Test: test_nvmf_rdma_resources_create ...[2024-04-27 04:51:37.466550] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:09:07.574 passed 00:09:07.574 Test: test_nvmf_rdma_qpair_compare ...passed 00:09:07.574 Test: test_nvmf_rdma_resize_cq ...[2024-04-27 04:51:37.468193] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. 
Current capacity 20, required 0 00:09:07.574 Using CQ of insufficient size may lead to CQ overrun 00:09:07.575 [2024-04-27 04:51:37.468501] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:09:07.575 [2024-04-27 04:51:37.468791] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:09:07.575 passed 00:09:07.575 00:09:07.575 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.575 suites 1 1 n/a 0 0 00:09:07.575 tests 10 10 10 0 0 00:09:07.575 asserts 584 584 584 0 n/a 00:09:07.575 00:09:07.575 Elapsed time = 0.004 seconds 00:09:07.834 00:09:07.834 real 0m0.043s 00:09:07.834 user 0m0.020s 00:09:07.834 sys 0m0.022s 00:09:07.834 04:51:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.834 04:51:37 -- common/autotest_common.sh@10 -- # set +x 00:09:07.834 ************************************ 00:09:07.834 END TEST unittest_nvmf_rdma 00:09:07.834 ************************************ 00:09:07.834 04:51:37 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:07.834 04:51:37 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:09:07.834 04:51:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:07.834 04:51:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:07.834 04:51:37 -- common/autotest_common.sh@10 -- # set +x 00:09:07.834 ************************************ 00:09:07.834 START TEST unittest_scsi 00:09:07.834 ************************************ 00:09:07.834 04:51:37 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:09:07.834 04:51:37 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:09:07.834 00:09:07.834 00:09:07.834 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.834 http://cunit.sourceforge.net/ 00:09:07.834 00:09:07.834 00:09:07.834 Suite: dev_suite 00:09:07.834 Test: dev_destruct_null_dev ...passed 00:09:07.834 Test: dev_destruct_zero_luns ...passed 00:09:07.834 Test: dev_destruct_null_lun ...passed 00:09:07.834 Test: dev_destruct_success ...passed 00:09:07.834 Test: dev_construct_num_luns_zero ...[2024-04-27 04:51:37.549767] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:09:07.834 passed 00:09:07.834 Test: dev_construct_no_lun_zero ...[2024-04-27 04:51:37.550227] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:09:07.834 passed 00:09:07.834 Test: dev_construct_null_lun ...passed 00:09:07.834 Test: dev_construct_name_too_long ...[2024-04-27 04:51:37.550291] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:09:07.834 [2024-04-27 04:51:37.550345] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:09:07.834 passed 00:09:07.834 Test: dev_construct_success ...passed 00:09:07.834 Test: dev_construct_success_lun_zero_not_first ...passed 00:09:07.834 Test: 
dev_queue_mgmt_task_success ...passed 00:09:07.834 Test: dev_queue_task_success ...passed 00:09:07.834 Test: dev_stop_success ...passed 00:09:07.834 Test: dev_add_port_max_ports ...[2024-04-27 04:51:37.550751] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:09:07.834 passed 00:09:07.834 Test: dev_add_port_construct_failure1 ...[2024-04-27 04:51:37.550875] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:09:07.834 passed 00:09:07.834 Test: dev_add_port_construct_failure2 ...passed 00:09:07.834 Test: dev_add_port_success1 ...passed 00:09:07.834 Test: dev_add_port_success2 ...passed[2024-04-27 04:51:37.551000] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:09:07.834 00:09:07.834 Test: dev_add_port_success3 ...passed 00:09:07.834 Test: dev_find_port_by_id_num_ports_zero ...passed 00:09:07.834 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:09:07.834 Test: dev_find_port_by_id_success ...passed 00:09:07.834 Test: dev_add_lun_bdev_not_found ...passed 00:09:07.834 Test: dev_add_lun_no_free_lun_id ...passed 00:09:07.834 Test: dev_add_lun_success1 ...passed 00:09:07.834 Test: dev_add_lun_success2 ...[2024-04-27 04:51:37.551609] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:09:07.834 passed 00:09:07.834 Test: dev_check_pending_tasks ...passed 00:09:07.834 Test: dev_iterate_luns ...passed 00:09:07.834 Test: dev_find_free_lun ...passed 00:09:07.834 00:09:07.834 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.834 suites 1 1 n/a 0 0 00:09:07.834 tests 29 29 29 0 0 00:09:07.834 asserts 97 97 97 0 n/a 00:09:07.834 00:09:07.834 Elapsed time = 0.003 seconds 00:09:07.834 04:51:37 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:09:07.834 00:09:07.834 00:09:07.834 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.834 http://cunit.sourceforge.net/ 00:09:07.834 00:09:07.834 00:09:07.834 Suite: lun_suite 00:09:07.834 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-04-27 04:51:37.587318] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:09:07.834 passed 00:09:07.834 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-04-27 04:51:37.587786] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:09:07.834 passed 00:09:07.834 Test: lun_task_mgmt_execute_lun_reset ...passed 00:09:07.834 Test: lun_task_mgmt_execute_target_reset ...passed 00:09:07.834 Test: lun_task_mgmt_execute_invalid_case ...passed 00:09:07.834 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:09:07.834 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...[2024-04-27 04:51:37.587999] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:09:07.834 passed 00:09:07.834 Test: lun_append_task_null_lun_not_supported ...passed 00:09:07.834 Test: lun_execute_scsi_task_pending ...passed 00:09:07.834 Test: lun_execute_scsi_task_complete ...passed 00:09:07.834 Test: lun_execute_scsi_task_resize ...passed 00:09:07.834 Test: lun_destruct_success ...passed 00:09:07.834 Test: lun_construct_null_ctx ...[2024-04-27 04:51:37.588218] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: 
bdev_name must be non-NULL 00:09:07.834 passed 00:09:07.834 Test: lun_construct_success ...passed 00:09:07.834 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:09:07.834 Test: lun_reset_task_suspend_scsi_task ...passed 00:09:07.834 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:09:07.834 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:09:07.834 00:09:07.834 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.834 suites 1 1 n/a 0 0 00:09:07.834 tests 18 18 18 0 0 00:09:07.834 asserts 153 153 153 0 n/a 00:09:07.834 00:09:07.834 Elapsed time = 0.001 seconds 00:09:07.834 04:51:37 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:09:07.834 00:09:07.834 00:09:07.834 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.834 http://cunit.sourceforge.net/ 00:09:07.834 00:09:07.834 00:09:07.834 Suite: scsi_suite 00:09:07.834 Test: scsi_init ...passed 00:09:07.834 00:09:07.834 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.834 suites 1 1 n/a 0 0 00:09:07.834 tests 1 1 1 0 0 00:09:07.834 asserts 1 1 1 0 n/a 00:09:07.834 00:09:07.834 Elapsed time = 0.000 seconds 00:09:07.834 04:51:37 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:09:07.834 00:09:07.834 00:09:07.834 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.834 http://cunit.sourceforge.net/ 00:09:07.834 00:09:07.834 00:09:07.834 Suite: translation_suite 00:09:07.834 Test: mode_select_6_test ...passed 00:09:07.834 Test: mode_select_6_test2 ...passed 00:09:07.834 Test: mode_sense_6_test ...passed 00:09:07.834 Test: mode_sense_10_test ...passed 00:09:07.834 Test: inquiry_evpd_test ...passed 00:09:07.834 Test: inquiry_standard_test ...passed 00:09:07.834 Test: inquiry_overflow_test ...passed 00:09:07.834 Test: task_complete_test ...passed 00:09:07.834 Test: lba_range_test ...passed 00:09:07.834 Test: xfer_len_test ...[2024-04-27 04:51:37.657911] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:09:07.834 passed 00:09:07.834 Test: xfer_test ...passed 00:09:07.834 Test: scsi_name_padding_test ...passed 00:09:07.834 Test: get_dif_ctx_test ...passed 00:09:07.834 Test: unmap_split_test ...passed 00:09:07.834 00:09:07.834 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.834 suites 1 1 n/a 0 0 00:09:07.834 tests 14 14 14 0 0 00:09:07.834 asserts 1200 1200 1200 0 n/a 00:09:07.834 00:09:07.834 Elapsed time = 0.005 seconds 00:09:07.834 04:51:37 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:09:07.834 00:09:07.834 00:09:07.834 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.834 http://cunit.sourceforge.net/ 00:09:07.834 00:09:07.834 00:09:07.834 Suite: reservation_suite 00:09:07.834 Test: test_reservation_register ...[2024-04-27 04:51:37.687673] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:07.834 passed 00:09:07.834 Test: test_reservation_reserve ...[2024-04-27 04:51:37.688027] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:07.834 [2024-04-27 04:51:37.688097] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:09:07.834 [2024-04-27 
04:51:37.688191] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:09:07.834 passed 00:09:07.834 Test: test_reservation_preempt_non_all_regs ...[2024-04-27 04:51:37.688250] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:07.834 [2024-04-27 04:51:37.688316] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:09:07.834 passed 00:09:07.834 Test: test_reservation_preempt_all_regs ...[2024-04-27 04:51:37.688441] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:07.834 passed 00:09:07.834 Test: test_reservation_cmds_conflict ...[2024-04-27 04:51:37.688574] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:07.834 [2024-04-27 04:51:37.688641] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:09:07.834 [2024-04-27 04:51:37.688682] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:09:07.834 [2024-04-27 04:51:37.688715] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:09:07.834 [2024-04-27 04:51:37.688756] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:09:07.834 [2024-04-27 04:51:37.688786] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:09:07.834 passed 00:09:07.834 Test: test_scsi2_reserve_release ...passed 00:09:07.834 Test: test_pr_with_scsi2_reserve_release ...[2024-04-27 04:51:37.688873] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:09:07.834 passed 00:09:07.834 00:09:07.834 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.834 suites 1 1 n/a 0 0 00:09:07.834 tests 7 7 7 0 0 00:09:07.834 asserts 257 257 257 0 n/a 00:09:07.834 00:09:07.834 Elapsed time = 0.001 seconds 00:09:07.834 00:09:07.834 real 0m0.163s 00:09:07.834 user 0m0.086s 00:09:07.834 sys 0m0.079s 00:09:07.835 04:51:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.835 04:51:37 -- common/autotest_common.sh@10 -- # set +x 00:09:07.835 ************************************ 00:09:07.835 END TEST unittest_scsi 00:09:07.835 ************************************ 00:09:08.095 04:51:37 -- unit/unittest.sh@276 -- # uname -s 00:09:08.095 04:51:37 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:09:08.095 04:51:37 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:09:08.095 04:51:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:08.095 04:51:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:08.095 04:51:37 -- common/autotest_common.sh@10 -- # set +x 00:09:08.095 ************************************ 00:09:08.095 START TEST unittest_sock 00:09:08.095 ************************************ 00:09:08.095 04:51:37 -- common/autotest_common.sh@1104 -- # unittest_sock 00:09:08.095 04:51:37 -- unit/unittest.sh@123 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:09:08.095 00:09:08.095 00:09:08.095 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.095 http://cunit.sourceforge.net/ 00:09:08.095 00:09:08.095 00:09:08.095 Suite: sock 00:09:08.095 Test: posix_sock ...passed 00:09:08.095 Test: ut_sock ...passed 00:09:08.095 Test: posix_sock_group ...passed 00:09:08.095 Test: ut_sock_group ...passed 00:09:08.095 Test: posix_sock_group_fairness ...passed 00:09:08.095 Test: _posix_sock_close ...passed 00:09:08.095 Test: sock_get_default_opts ...passed 00:09:08.095 Test: ut_sock_impl_get_set_opts ...passed 00:09:08.095 Test: posix_sock_impl_get_set_opts ...passed 00:09:08.095 Test: ut_sock_map ...passed 00:09:08.095 Test: override_impl_opts ...passed 00:09:08.095 Test: ut_sock_group_get_ctx ...passed 00:09:08.095 00:09:08.095 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.095 suites 1 1 n/a 0 0 00:09:08.095 tests 12 12 12 0 0 00:09:08.095 asserts 349 349 349 0 n/a 00:09:08.095 00:09:08.095 Elapsed time = 0.009 seconds 00:09:08.095 04:51:37 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:09:08.095 00:09:08.095 00:09:08.095 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.095 http://cunit.sourceforge.net/ 00:09:08.095 00:09:08.095 00:09:08.095 Suite: posix 00:09:08.095 Test: flush ...passed 00:09:08.095 00:09:08.095 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.095 suites 1 1 n/a 0 0 00:09:08.095 tests 1 1 1 0 0 00:09:08.095 asserts 28 28 28 0 n/a 00:09:08.095 00:09:08.095 Elapsed time = 0.000 seconds 00:09:08.095 04:51:37 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:08.095 00:09:08.095 real 0m0.108s 00:09:08.095 user 0m0.039s 00:09:08.095 sys 0m0.046s 00:09:08.095 04:51:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.095 ************************************ 00:09:08.095 END TEST unittest_sock 00:09:08.095 ************************************ 00:09:08.095 04:51:37 -- common/autotest_common.sh@10 -- # set +x 00:09:08.095 04:51:37 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:09:08.095 04:51:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:08.095 04:51:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:08.095 04:51:37 -- common/autotest_common.sh@10 -- # set +x 00:09:08.095 ************************************ 00:09:08.095 START TEST unittest_thread 00:09:08.095 ************************************ 00:09:08.095 04:51:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:09:08.095 00:09:08.095 00:09:08.095 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.095 http://cunit.sourceforge.net/ 00:09:08.095 00:09:08.095 00:09:08.095 Suite: io_channel 00:09:08.095 Test: thread_alloc ...passed 00:09:08.095 Test: thread_send_msg ...passed 00:09:08.095 Test: thread_poller ...passed 00:09:08.095 Test: poller_pause ...passed 00:09:08.095 Test: thread_for_each ...passed 00:09:08.095 Test: for_each_channel_remove ...passed 00:09:08.095 Test: for_each_channel_unreg ...[2024-04-27 04:51:37.951048] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7ffc1fc4e810 already registered (old:0x613000000200 new:0x6130000003c0) 00:09:08.095 passed 00:09:08.095 Test: thread_name ...passed 
00:09:08.095 Test: channel ...[2024-04-27 04:51:37.957320] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x5620161720e0 00:09:08.095 passed 00:09:08.095 Test: channel_destroy_races ...passed 00:09:08.095 Test: thread_exit_test ...[2024-04-27 04:51:37.964644] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:09:08.095 passed 00:09:08.095 Test: thread_update_stats_test ...passed 00:09:08.095 Test: nested_channel ...passed 00:09:08.095 Test: device_unregister_and_thread_exit_race ...passed 00:09:08.095 Test: cache_closest_timed_poller ...passed 00:09:08.095 Test: multi_timed_pollers_have_same_expiration ...passed 00:09:08.095 Test: io_device_lookup ...passed 00:09:08.095 Test: spdk_spin ...[2024-04-27 04:51:37.980129] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:09:08.095 [2024-04-27 04:51:37.980284] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffc1fc4e800 00:09:08.095 [2024-04-27 04:51:37.980993] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:09:08.095 [2024-04-27 04:51:37.983228] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:09:08.095 [2024-04-27 04:51:37.983356] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffc1fc4e800 00:09:08.095 [2024-04-27 04:51:37.983415] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:09:08.095 [2024-04-27 04:51:37.983481] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffc1fc4e800 00:09:08.095 [2024-04-27 04:51:37.983949] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:09:08.095 [2024-04-27 04:51:37.984045] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffc1fc4e800 00:09:08.095 [2024-04-27 04:51:37.984099] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:09:08.095 [2024-04-27 04:51:37.984512] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffc1fc4e800 00:09:08.095 passed 00:09:08.095 Test: for_each_channel_and_thread_exit_race ...passed 00:09:08.355 Test: for_each_thread_and_thread_exit_race ...passed 00:09:08.355 00:09:08.355 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.355 suites 1 1 n/a 0 0 00:09:08.355 tests 20 20 20 0 0 00:09:08.355 asserts 409 409 409 0 n/a 00:09:08.355 00:09:08.355 Elapsed time = 0.071 seconds 00:09:08.355 00:09:08.355 real 0m0.117s 00:09:08.355 user 0m0.069s 00:09:08.355 sys 0m0.047s 00:09:08.355 04:51:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.355 04:51:38 -- common/autotest_common.sh@10 -- # set +x 00:09:08.355 ************************************ 00:09:08.355 END TEST unittest_thread 00:09:08.355 
************************************ 00:09:08.355 04:51:38 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:09:08.355 04:51:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:08.355 04:51:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:08.355 04:51:38 -- common/autotest_common.sh@10 -- # set +x 00:09:08.355 ************************************ 00:09:08.355 START TEST unittest_iobuf 00:09:08.355 ************************************ 00:09:08.355 04:51:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:09:08.355 00:09:08.355 00:09:08.355 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.355 http://cunit.sourceforge.net/ 00:09:08.355 00:09:08.355 00:09:08.355 Suite: io_channel 00:09:08.355 Test: iobuf ...passed 00:09:08.355 Test: iobuf_cache ...[2024-04-27 04:51:38.092666] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:09:08.355 [2024-04-27 04:51:38.093200] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:09:08.355 [2024-04-27 04:51:38.093504] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:09:08.355 [2024-04-27 04:51:38.093681] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:09:08.355 [2024-04-27 04:51:38.093883] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:09:08.355 [2024-04-27 04:51:38.094059] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:09:08.355 passed 00:09:08.355 00:09:08.355 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.355 suites 1 1 n/a 0 0 00:09:08.355 tests 2 2 2 0 0 00:09:08.355 asserts 107 107 107 0 n/a 00:09:08.355 00:09:08.355 Elapsed time = 0.007 seconds 00:09:08.355 00:09:08.355 real 0m0.043s 00:09:08.355 user 0m0.023s 00:09:08.355 sys 0m0.019s 00:09:08.355 04:51:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.355 04:51:38 -- common/autotest_common.sh@10 -- # set +x 00:09:08.355 ************************************ 00:09:08.355 END TEST unittest_iobuf 00:09:08.355 ************************************ 00:09:08.355 04:51:38 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:09:08.355 04:51:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:08.355 04:51:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:08.355 04:51:38 -- common/autotest_common.sh@10 -- # set +x 00:09:08.355 ************************************ 00:09:08.355 START TEST unittest_util 00:09:08.355 ************************************ 00:09:08.355 04:51:38 -- common/autotest_common.sh@1104 -- # unittest_util 00:09:08.355 04:51:38 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:09:08.355 00:09:08.355 00:09:08.355 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.355 http://cunit.sourceforge.net/ 00:09:08.355 00:09:08.355 00:09:08.355 Suite: base64 00:09:08.355 Test: test_base64_get_encoded_strlen ...passed 00:09:08.355 Test: test_base64_get_decoded_len ...passed 00:09:08.355 Test: test_base64_encode ...passed 00:09:08.355 Test: test_base64_decode ...passed 00:09:08.355 Test: test_base64_urlsafe_encode ...passed 00:09:08.355 Test: test_base64_urlsafe_decode ...passed 00:09:08.355 00:09:08.355 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.355 suites 1 1 n/a 0 0 00:09:08.355 tests 6 6 6 0 0 00:09:08.355 asserts 112 112 112 0 n/a 00:09:08.355 00:09:08.355 Elapsed time = 0.000 seconds 00:09:08.355 04:51:38 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:09:08.355 00:09:08.355 00:09:08.355 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.355 http://cunit.sourceforge.net/ 00:09:08.355 00:09:08.355 00:09:08.355 Suite: bit_array 00:09:08.355 Test: test_1bit ...passed 00:09:08.355 Test: test_64bit ...passed 00:09:08.355 Test: test_find ...passed 00:09:08.355 Test: test_resize ...passed 00:09:08.355 Test: test_errors ...passed 00:09:08.355 Test: test_count ...passed 00:09:08.355 Test: test_mask_store_load ...passed 00:09:08.355 Test: test_mask_clear ...passed 00:09:08.355 00:09:08.355 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.355 suites 1 1 n/a 0 0 00:09:08.355 tests 8 8 8 0 0 00:09:08.355 asserts 5075 5075 5075 0 n/a 00:09:08.355 00:09:08.355 Elapsed time = 0.002 seconds 00:09:08.355 04:51:38 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:09:08.355 00:09:08.355 00:09:08.355 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.355 http://cunit.sourceforge.net/ 00:09:08.355 00:09:08.355 00:09:08.355 Suite: cpuset 00:09:08.355 Test: test_cpuset ...passed 00:09:08.355 Test: test_cpuset_parse ...[2024-04-27 04:51:38.227616] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:09:08.355 [2024-04-27 04:51:38.228013] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:09:08.355 [2024-04-27 04:51:38.228149] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:09:08.355 [2024-04-27 04:51:38.228283] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:09:08.355 [2024-04-27 04:51:38.228351] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:09:08.355 [2024-04-27 04:51:38.228424] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:09:08.355 [2024-04-27 04:51:38.228489] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:09:08.355 [2024-04-27 04:51:38.228594] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:09:08.355 passed 00:09:08.355 Test: test_cpuset_fmt ...passed 00:09:08.355 00:09:08.355 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.355 suites 1 1 n/a 0 0 00:09:08.355 tests 3 3 3 0 0 00:09:08.355 asserts 65 65 65 0 n/a 00:09:08.355 00:09:08.355 Elapsed time = 0.002 seconds 00:09:08.355 04:51:38 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:09:08.615 00:09:08.615 00:09:08.615 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.615 http://cunit.sourceforge.net/ 00:09:08.615 00:09:08.615 00:09:08.615 Suite: crc16 00:09:08.615 Test: test_crc16_t10dif ...passed 00:09:08.615 Test: test_crc16_t10dif_seed ...passed 00:09:08.615 Test: test_crc16_t10dif_copy ...passed 00:09:08.615 00:09:08.615 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.615 suites 1 1 n/a 0 0 00:09:08.615 tests 3 3 3 0 0 00:09:08.615 asserts 5 5 5 0 n/a 00:09:08.615 00:09:08.615 Elapsed time = 0.000 seconds 00:09:08.615 04:51:38 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:09:08.615 00:09:08.615 00:09:08.615 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.615 http://cunit.sourceforge.net/ 00:09:08.615 00:09:08.615 00:09:08.615 Suite: crc32_ieee 00:09:08.615 Test: test_crc32_ieee ...passed 00:09:08.615 00:09:08.615 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.615 suites 1 1 n/a 0 0 00:09:08.615 tests 1 1 1 0 0 00:09:08.615 asserts 1 1 1 0 n/a 00:09:08.615 00:09:08.615 Elapsed time = 0.000 seconds 00:09:08.615 04:51:38 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:09:08.615 00:09:08.615 00:09:08.615 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.615 http://cunit.sourceforge.net/ 00:09:08.615 00:09:08.615 00:09:08.615 Suite: crc32c 00:09:08.615 Test: test_crc32c ...passed 00:09:08.615 Test: test_crc32c_nvme ...passed 00:09:08.615 00:09:08.615 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.615 suites 1 1 n/a 0 0 00:09:08.615 tests 2 2 2 0 0 00:09:08.615 asserts 16 16 16 0 n/a 00:09:08.615 00:09:08.615 Elapsed time = 0.000 seconds 00:09:08.615 04:51:38 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:09:08.615 00:09:08.615 00:09:08.615 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.615 http://cunit.sourceforge.net/ 00:09:08.615 00:09:08.615 00:09:08.615 Suite: crc64 00:09:08.615 Test: test_crc64_nvme 
...passed 00:09:08.615 00:09:08.615 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.615 suites 1 1 n/a 0 0 00:09:08.615 tests 1 1 1 0 0 00:09:08.615 asserts 4 4 4 0 n/a 00:09:08.615 00:09:08.615 Elapsed time = 0.000 seconds 00:09:08.615 04:51:38 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:09:08.615 00:09:08.615 00:09:08.615 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.615 http://cunit.sourceforge.net/ 00:09:08.615 00:09:08.615 00:09:08.615 Suite: string 00:09:08.615 Test: test_parse_ip_addr ...passed 00:09:08.615 Test: test_str_chomp ...passed 00:09:08.615 Test: test_parse_capacity ...passed 00:09:08.615 Test: test_sprintf_append_realloc ...passed 00:09:08.615 Test: test_strtol ...passed 00:09:08.615 Test: test_strtoll ...passed 00:09:08.615 Test: test_strarray ...passed 00:09:08.615 Test: test_strcpy_replace ...passed 00:09:08.615 00:09:08.615 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.615 suites 1 1 n/a 0 0 00:09:08.615 tests 8 8 8 0 0 00:09:08.615 asserts 161 161 161 0 n/a 00:09:08.615 00:09:08.615 Elapsed time = 0.001 seconds 00:09:08.615 04:51:38 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:09:08.615 00:09:08.615 00:09:08.615 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.615 http://cunit.sourceforge.net/ 00:09:08.615 00:09:08.615 00:09:08.615 Suite: dif 00:09:08.615 Test: dif_generate_and_verify_test ...[2024-04-27 04:51:38.380177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:09:08.615 [2024-04-27 04:51:38.380810] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:09:08.616 [2024-04-27 04:51:38.381181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:09:08.616 [2024-04-27 04:51:38.381546] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:09:08.616 [2024-04-27 04:51:38.381883] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:09:08.616 [2024-04-27 04:51:38.382224] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:09:08.616 passed 00:09:08.616 Test: dif_disable_check_test ...[2024-04-27 04:51:38.383324] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:09:08.616 [2024-04-27 04:51:38.383748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:09:08.616 [2024-04-27 04:51:38.384082] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:09:08.616 passed 00:09:08.616 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-04-27 04:51:38.385214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:09:08.616 [2024-04-27 04:51:38.385569] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:09:08.616 [2024-04-27 
04:51:38.385935] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:09:08.616 [2024-04-27 04:51:38.386357] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:09:08.616 [2024-04-27 04:51:38.386744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:08.616 [2024-04-27 04:51:38.387122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:08.616 [2024-04-27 04:51:38.387484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:08.616 [2024-04-27 04:51:38.387836] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:08.616 [2024-04-27 04:51:38.388187] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:09:08.616 [2024-04-27 04:51:38.388721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:09:08.616 [2024-04-27 04:51:38.389105] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:09:08.616 passed 00:09:08.616 Test: dif_apptag_mask_test ...[2024-04-27 04:51:38.389479] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:09:08.616 [2024-04-27 04:51:38.389831] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:09:08.616 passed 00:09:08.616 Test: dif_sec_512_md_0_error_test ...[2024-04-27 04:51:38.390077] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:08.616 passed 00:09:08.616 Test: dif_sec_4096_md_0_error_test ...[2024-04-27 04:51:38.390147] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:08.616 passed 00:09:08.616 Test: dif_sec_4100_md_128_error_test ...[2024-04-27 04:51:38.390223] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
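The dif_sec_512_md_0_error and dif_sec_4096_md_0_error cases above hand the DIF context a metadata area too small to hold the protection information, and the library rejects it ("Metadata size is smaller than DIF size"). For orientation: T10 protection information is an 8-byte trailer per block (2-byte guard CRC, 2-byte application tag, 4-byte reference tag), so any usable metadata region must be at least that large. A minimal sketch of that sanity check, independent of SPDK's actual spdk_dif_ctx_init logic; the struct and function names are illustrative only:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Classic 8-byte T10 protection information trailer. */
struct t10_pi_tuple {
        uint16_t guard;     /* CRC of the data block       */
        uint16_t app_tag;   /* application-defined tag     */
        uint32_t ref_tag;   /* typically the lower LBA bits */
};

/* Reject layouts whose metadata cannot hold the PI trailer. */
static bool pi_layout_is_valid(uint32_t block_size, uint32_t md_size)
{
        if (md_size < sizeof(struct t10_pi_tuple)) {
                fprintf(stderr, "metadata size %u is smaller than DIF size %zu\n",
                        md_size, sizeof(struct t10_pi_tuple));
                return false;
        }
        if (block_size == 0) {
                return false;   /* the real library also enforces alignment rules */
        }
        return true;
}

int main(void)
{
        printf("512+0: %s\n", pi_layout_is_valid(512, 0) ? "ok" : "rejected");
        printf("512+8: %s\n", pi_layout_is_valid(512, 8) ? "ok" : "rejected");
        return 0;
}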
00:09:08.616 [2024-04-27 04:51:38.390318] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:09:08.616 [2024-04-27 04:51:38.390384] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:09:08.616 passed 00:09:08.616 Test: dif_guard_seed_test ...passed 00:09:08.616 Test: dif_guard_value_test ...passed 00:09:08.616 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:09:08.616 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:09:08.616 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:09:08.616 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:08.616 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:08.616 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:09:08.616 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:09:08.616 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:09:08.616 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:09:08.616 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:08.616 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:09:08.616 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:09:08.616 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:09:08.616 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:09:08.616 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:09:08.616 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:09:08.616 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:08.616 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:08.616 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-27 04:51:38.435308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4d, Actual=fd4c 00:09:08.616 [2024-04-27 04:51:38.437862] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fe20, Actual=fe21 00:09:08.616 [2024-04-27 04:51:38.440395] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=89 00:09:08.616 [2024-04-27 04:51:38.442920] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=89 00:09:08.616 [2024-04-27 04:51:38.445475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5f 00:09:08.616 [2024-04-27 04:51:38.447976] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5f 00:09:08.616 [2024-04-27 04:51:38.450511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4c, Actual=9ca5 00:09:08.616 [2024-04-27 04:51:38.452598] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fe21, Actual=cfc7 00:09:08.616 [2024-04-27 04:51:38.454679] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ec, 
Actual=1ab753ed 00:09:08.616 [2024-04-27 04:51:38.457203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=38574661, Actual=38574660 00:09:08.616 [2024-04-27 04:51:38.459757] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=89 00:09:08.616 [2024-04-27 04:51:38.462265] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=89 00:09:08.616 [2024-04-27 04:51:38.464794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5f 00:09:08.616 [2024-04-27 04:51:38.467300] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5f 00:09:08.616 [2024-04-27 04:51:38.469832] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ed, Actual=6ee92b 00:09:08.616 [2024-04-27 04:51:38.471928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=38574660, Actual=86abe27d 00:09:08.616 [2024-04-27 04:51:38.474059] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:09:08.616 [2024-04-27 04:51:38.476597] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=88010a2d4837a267, Actual=88010a2d4837a266 00:09:08.616 [2024-04-27 04:51:38.479114] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=89 00:09:08.616 [2024-04-27 04:51:38.481636] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=89 00:09:08.616 [2024-04-27 04:51:38.484138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=10000005e 00:09:08.616 [2024-04-27 04:51:38.486676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=10000005e 00:09:08.616 [2024-04-27 04:51:38.489267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d3, Actual=ebd1d5af7635895e 00:09:08.616 [2024-04-27 04:51:38.491381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=88010a2d4837a266, Actual=9bf88583ab637382 00:09:08.616 passed 00:09:08.616 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-04-27 04:51:38.492619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:09:08.616 [2024-04-27 04:51:38.492970] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:09:08.616 [2024-04-27 04:51:38.493328] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.616 [2024-04-27 04:51:38.493684] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.616 [2024-04-27 04:51:38.494077] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.616 [2024-04-27 04:51:38.494421] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.616 [2024-04-27 04:51:38.494786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9ca5 00:09:08.616 [2024-04-27 04:51:38.495108] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=cfc7 00:09:08.616 [2024-04-27 04:51:38.495442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ec, Actual=1ab753ed 00:09:08.616 [2024-04-27 04:51:38.495790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574661, Actual=38574660 00:09:08.616 [2024-04-27 04:51:38.496176] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.616 [2024-04-27 04:51:38.496517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.616 [2024-04-27 04:51:38.496898] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.616 [2024-04-27 04:51:38.497242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.616 [2024-04-27 04:51:38.497584] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6ee92b 00:09:08.616 [2024-04-27 04:51:38.497906] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=86abe27d 00:09:08.617 [2024-04-27 04:51:38.498258] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:09:08.617 [2024-04-27 04:51:38.498630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a267, Actual=88010a2d4837a266 00:09:08.617 [2024-04-27 04:51:38.498994] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.617 [2024-04-27 04:51:38.499340] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.617 [2024-04-27 04:51:38.499693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:09:08.617 [2024-04-27 04:51:38.500036] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:09:08.617 [2024-04-27 04:51:38.500405] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ebd1d5af7635895e 00:09:08.617 [2024-04-27 04:51:38.500763] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9bf88583ab637382 00:09:08.617 
passed 00:09:08.617 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-04-27 04:51:38.501157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:09:08.617 [2024-04-27 04:51:38.501504] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:09:08.617 [2024-04-27 04:51:38.501855] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.617 [2024-04-27 04:51:38.502200] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.617 [2024-04-27 04:51:38.502581] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.617 [2024-04-27 04:51:38.502935] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.617 [2024-04-27 04:51:38.503284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9ca5 00:09:08.617 [2024-04-27 04:51:38.503605] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=cfc7 00:09:08.617 [2024-04-27 04:51:38.503928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ec, Actual=1ab753ed 00:09:08.617 [2024-04-27 04:51:38.504277] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574661, Actual=38574660 00:09:08.617 [2024-04-27 04:51:38.504640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.617 [2024-04-27 04:51:38.505012] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.617 [2024-04-27 04:51:38.505362] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.617 [2024-04-27 04:51:38.505705] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.617 [2024-04-27 04:51:38.506050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6ee92b 00:09:08.617 [2024-04-27 04:51:38.506388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=86abe27d 00:09:08.617 [2024-04-27 04:51:38.506758] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:09:08.617 [2024-04-27 04:51:38.507107] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a267, Actual=88010a2d4837a266 00:09:08.617 [2024-04-27 04:51:38.507450] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.617 [2024-04-27 04:51:38.507800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to 
compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.617 [2024-04-27 04:51:38.508155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:09:08.617 [2024-04-27 04:51:38.508499] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:09:08.617 [2024-04-27 04:51:38.508918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ebd1d5af7635895e 00:09:08.617 [2024-04-27 04:51:38.509237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9bf88583ab637382 00:09:08.617 passed 00:09:08.617 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-04-27 04:51:38.509651] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:09:08.617 [2024-04-27 04:51:38.510021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:09:08.617 [2024-04-27 04:51:38.510374] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.617 [2024-04-27 04:51:38.510736] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.878 [2024-04-27 04:51:38.511121] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.878 [2024-04-27 04:51:38.511467] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.878 [2024-04-27 04:51:38.511814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9ca5 00:09:08.878 [2024-04-27 04:51:38.512150] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=cfc7 00:09:08.878 [2024-04-27 04:51:38.512483] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ec, Actual=1ab753ed 00:09:08.878 [2024-04-27 04:51:38.512845] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574661, Actual=38574660 00:09:08.878 [2024-04-27 04:51:38.513218] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.878 [2024-04-27 04:51:38.513567] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.878 [2024-04-27 04:51:38.513904] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.878 [2024-04-27 04:51:38.514271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.878 [2024-04-27 04:51:38.514631] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6ee92b 00:09:08.878 [2024-04-27 04:51:38.514965] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=86abe27d 00:09:08.878 [2024-04-27 04:51:38.515321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:09:08.878 [2024-04-27 04:51:38.515671] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a267, Actual=88010a2d4837a266 00:09:08.878 [2024-04-27 04:51:38.516021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.878 [2024-04-27 04:51:38.516378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.878 [2024-04-27 04:51:38.516759] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:09:08.878 [2024-04-27 04:51:38.517112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:09:08.878 [2024-04-27 04:51:38.517492] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ebd1d5af7635895e 00:09:08.878 [2024-04-27 04:51:38.517813] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9bf88583ab637382 00:09:08.878 passed 00:09:08.878 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-04-27 04:51:38.518210] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:09:08.878 [2024-04-27 04:51:38.518559] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:09:08.878 [2024-04-27 04:51:38.518904] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.878 [2024-04-27 04:51:38.519250] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.878 [2024-04-27 04:51:38.519611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.878 [2024-04-27 04:51:38.519947] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.878 [2024-04-27 04:51:38.520293] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9ca5 00:09:08.878 [2024-04-27 04:51:38.520642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=cfc7 00:09:08.878 passed 00:09:08.878 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-04-27 04:51:38.521050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ec, Actual=1ab753ed 00:09:08.878 [2024-04-27 04:51:38.521388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574661, Actual=38574660 
00:09:08.878 [2024-04-27 04:51:38.521749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.878 [2024-04-27 04:51:38.522051] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.878 [2024-04-27 04:51:38.522369] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.878 [2024-04-27 04:51:38.522697] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.878 [2024-04-27 04:51:38.523059] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6ee92b 00:09:08.878 [2024-04-27 04:51:38.523355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=86abe27d 00:09:08.878 [2024-04-27 04:51:38.523719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:09:08.878 [2024-04-27 04:51:38.524044] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a267, Actual=88010a2d4837a266 00:09:08.878 [2024-04-27 04:51:38.524357] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.878 [2024-04-27 04:51:38.524725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.878 [2024-04-27 04:51:38.525049] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:09:08.878 [2024-04-27 04:51:38.525390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:09:08.878 [2024-04-27 04:51:38.525738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ebd1d5af7635895e 00:09:08.878 [2024-04-27 04:51:38.526046] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9bf88583ab637382 00:09:08.878 passed 00:09:08.878 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-04-27 04:51:38.526391] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:09:08.878 [2024-04-27 04:51:38.526748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:09:08.878 [2024-04-27 04:51:38.527079] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.878 [2024-04-27 04:51:38.527411] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.878 [2024-04-27 04:51:38.527754] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.878 [2024-04-27 04:51:38.528066] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.878 [2024-04-27 04:51:38.528381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9ca5 00:09:08.878 [2024-04-27 04:51:38.528686] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=cfc7 00:09:08.878 passed 00:09:08.878 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-04-27 04:51:38.529029] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ec, Actual=1ab753ed 00:09:08.878 [2024-04-27 04:51:38.529337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574661, Actual=38574660 00:09:08.878 [2024-04-27 04:51:38.529675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.878 [2024-04-27 04:51:38.529994] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.878 [2024-04-27 04:51:38.530311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.878 [2024-04-27 04:51:38.530642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.878 [2024-04-27 04:51:38.530975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6ee92b 00:09:08.878 [2024-04-27 04:51:38.531261] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=86abe27d 00:09:08.879 [2024-04-27 04:51:38.531614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:09:08.879 [2024-04-27 04:51:38.531926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a267, Actual=88010a2d4837a266 00:09:08.879 [2024-04-27 04:51:38.532235] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.879 [2024-04-27 04:51:38.532527] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.879 [2024-04-27 04:51:38.532856] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:09:08.879 [2024-04-27 04:51:38.533178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:09:08.879 [2024-04-27 04:51:38.533513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ebd1d5af7635895e 00:09:08.879 [2024-04-27 04:51:38.533815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9bf88583ab637382 00:09:08.879 passed 00:09:08.879 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 
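Every "Failed to compare Guard/App Tag/Ref Tag" line in this run is an injected mismatch: the generate-and-verify tests corrupt one field of the protection information and check that verification catches it. The guard field is a checksum of the data block; for classic T10 DIF it is a CRC-16 with polynomial 0x8BB7. A small, self-contained sketch of computing such a guard and detecting a corrupted block — a bitwise reference version, not SPDK's optimized implementation:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC-16/T10-DIF: poly 0x8BB7, init 0, no reflection, no final XOR. */
static uint16_t crc16_t10dif(uint16_t crc, const uint8_t *buf, size_t len)
{
        for (size_t i = 0; i < len; i++) {
                crc ^= (uint16_t)buf[i] << 8;
                for (int bit = 0; bit < 8; bit++) {
                        crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                             : (uint16_t)(crc << 1);
                }
        }
        return crc;
}

int main(void)
{
        uint8_t block[512];
        memset(block, 0xA5, sizeof(block));

        uint16_t guard = crc16_t10dif(0, block, sizeof(block));   /* "generate" */

        block[10] ^= 0x01;                                        /* corrupt one byte */
        uint16_t actual = crc16_t10dif(0, block, sizeof(block));  /* "verify"   */

        if (actual != guard) {
                printf("Failed to compare Guard: Expected=%04x, Actual=%04x\n",
                       guard, actual);
        }
        return 0;
}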
00:09:08.879 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:09:08.879 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:09:08.879 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:08.879 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:09:08.879 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:09:08.879 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:08.879 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:09:08.879 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:08.879 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-27 04:51:38.578426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4d, Actual=fd4c 00:09:08.879 [2024-04-27 04:51:38.579616] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=7c7, Actual=7c6 00:09:08.879 [2024-04-27 04:51:38.580754] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=89 00:09:08.879 [2024-04-27 04:51:38.581873] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=89 00:09:08.879 [2024-04-27 04:51:38.583013] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5f 00:09:08.879 [2024-04-27 04:51:38.584134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5f 00:09:08.879 [2024-04-27 04:51:38.585273] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4c, Actual=9ca5 00:09:08.879 [2024-04-27 04:51:38.586393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=ba8c, Actual=8b6a 00:09:08.879 [2024-04-27 04:51:38.587536] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ec, Actual=1ab753ed 00:09:08.879 [2024-04-27 04:51:38.588673] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=b52300ac, Actual=b52300ad 00:09:08.879 [2024-04-27 04:51:38.589816] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=89 00:09:08.879 [2024-04-27 04:51:38.590990] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=89 00:09:08.879 [2024-04-27 04:51:38.592115] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5f 00:09:08.879 [2024-04-27 04:51:38.593274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5f 00:09:08.879 [2024-04-27 04:51:38.594405] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ed, Actual=6ee92b 00:09:08.879 [2024-04-27 04:51:38.595534] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=c91054db, Actual=77ecf0c6 00:09:08.879 [2024-04-27 04:51:38.596670] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:09:08.879 [2024-04-27 04:51:38.597846] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=51be4f42cae8c476, Actual=51be4f42cae8c477 00:09:08.879 [2024-04-27 04:51:38.598985] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=89 00:09:08.879 [2024-04-27 04:51:38.600114] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=89 00:09:08.879 [2024-04-27 04:51:38.601255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=10000005e 00:09:08.879 [2024-04-27 04:51:38.602392] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=10000005e 00:09:08.879 [2024-04-27 04:51:38.603537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d3, Actual=ebd1d5af7635895e 00:09:08.879 passed 00:09:08.879 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-04-27 04:51:38.604718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=2d19b1684f09bf67, Actual=3ee03ec6ac5d6e83 00:09:08.879 [2024-04-27 04:51:38.605104] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:09:08.879 [2024-04-27 04:51:38.605400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=905d, Actual=905c 00:09:08.879 [2024-04-27 04:51:38.605684] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.879 [2024-04-27 04:51:38.605959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.879 [2024-04-27 04:51:38.606273] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.879 [2024-04-27 04:51:38.606595] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.879 [2024-04-27 04:51:38.606869] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9ca5 00:09:08.879 [2024-04-27 04:51:38.607149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=1cf0 00:09:08.879 [2024-04-27 04:51:38.607420] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ec, Actual=1ab753ed 00:09:08.879 [2024-04-27 04:51:38.607695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=969514db, Actual=969514da 00:09:08.879 [2024-04-27 04:51:38.607989] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.879 [2024-04-27 04:51:38.608267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.879 [2024-04-27 04:51:38.608548] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.879 [2024-04-27 04:51:38.608883] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.879 [2024-04-27 04:51:38.609172] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6ee92b 00:09:08.879 [2024-04-27 04:51:38.609448] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=545ae4b1 00:09:08.879 [2024-04-27 04:51:38.609744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:09:08.879 [2024-04-27 04:51:38.610025] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=44c1d431d33b4bb3, Actual=44c1d431d33b4bb2 00:09:08.879 [2024-04-27 04:51:38.610313] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.879 [2024-04-27 04:51:38.610602] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.879 [2024-04-27 04:51:38.610880] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:09:08.879 [2024-04-27 04:51:38.611157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:09:08.879 [2024-04-27 04:51:38.611460] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ebd1d5af7635895e 00:09:08.879 [2024-04-27 04:51:38.611739] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=2b9fa5b5b58ee146 00:09:08.879 passed 00:09:08.879 Test: dix_sec_512_md_0_error ...passed 00:09:08.879 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-04-27 04:51:38.611815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:09:08.879 passed 00:09:08.879 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:09:08.879 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:09:08.879 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:08.879 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:09:08.879 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:09:08.879 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:08.879 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:09:08.879 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:08.879 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-27 04:51:38.655905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4d, Actual=fd4c 00:09:08.879 [2024-04-27 04:51:38.657103] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=7c7, Actual=7c6 00:09:08.879 [2024-04-27 04:51:38.658245] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=89 00:09:08.879 [2024-04-27 04:51:38.659373] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=89 00:09:08.879 [2024-04-27 04:51:38.660538] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5f 00:09:08.879 [2024-04-27 04:51:38.661689] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5f 00:09:08.879 [2024-04-27 04:51:38.662823] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4c, Actual=9ca5 00:09:08.879 [2024-04-27 04:51:38.663951] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=ba8c, Actual=8b6a 00:09:08.879 [2024-04-27 04:51:38.665084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ec, Actual=1ab753ed 00:09:08.879 [2024-04-27 04:51:38.666214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=b52300ac, Actual=b52300ad 00:09:08.880 [2024-04-27 04:51:38.667380] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=89 00:09:08.880 [2024-04-27 04:51:38.668507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=89 00:09:08.880 [2024-04-27 04:51:38.669656] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5f 00:09:08.880 [2024-04-27 04:51:38.670798] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=5f 00:09:08.880 [2024-04-27 04:51:38.671930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ed, Actual=6ee92b 00:09:08.880 [2024-04-27 04:51:38.673069] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=c91054db, Actual=77ecf0c6 00:09:08.880 [2024-04-27 04:51:38.674224] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:09:08.880 [2024-04-27 04:51:38.675361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=51be4f42cae8c476, Actual=51be4f42cae8c477 00:09:08.880 [2024-04-27 04:51:38.676478] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=89 00:09:08.880 [2024-04-27 04:51:38.677605] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=89 00:09:08.880 [2024-04-27 04:51:38.678746] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=10000005e 00:09:08.880 [2024-04-27 04:51:38.679869] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=10000005e 00:09:08.880 [2024-04-27 04:51:38.681024] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d3, Actual=ebd1d5af7635895e 00:09:08.880 [2024-04-27 04:51:38.682148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=2d19b1684f09bf67, Actual=3ee03ec6ac5d6e83 00:09:08.880 passed 00:09:08.880 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-04-27 04:51:38.682551] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:09:08.880 [2024-04-27 04:51:38.682834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=905d, Actual=905c 00:09:08.880 [2024-04-27 04:51:38.683122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.880 [2024-04-27 04:51:38.683420] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.880 [2024-04-27 04:51:38.683721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.880 [2024-04-27 04:51:38.683997] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.880 [2024-04-27 04:51:38.684276] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9ca5 00:09:08.880 [2024-04-27 04:51:38.684550] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=1cf0 00:09:08.880 [2024-04-27 04:51:38.684845] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ec, Actual=1ab753ed 00:09:08.880 [2024-04-27 04:51:38.685126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=969514db, Actual=969514da 00:09:08.880 [2024-04-27 04:51:38.685426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.880 [2024-04-27 04:51:38.685702] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: 
Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.880 [2024-04-27 04:51:38.685976] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.880 [2024-04-27 04:51:38.686258] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:09:08.880 [2024-04-27 04:51:38.686543] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6ee92b 00:09:08.880 [2024-04-27 04:51:38.686841] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=545ae4b1 00:09:08.880 [2024-04-27 04:51:38.687147] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:09:08.880 [2024-04-27 04:51:38.687429] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=44c1d431d33b4bb3, Actual=44c1d431d33b4bb2 00:09:08.880 [2024-04-27 04:51:38.687696] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.880 [2024-04-27 04:51:38.687972] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:09:08.880 [2024-04-27 04:51:38.688231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:09:08.880 [2024-04-27 04:51:38.688504] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:09:08.880 [2024-04-27 04:51:38.688789] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ebd1d5af7635895e 00:09:08.880 [2024-04-27 04:51:38.689071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=2b9fa5b5b58ee146 00:09:08.880 passed 00:09:08.880 Test: set_md_interleave_iovs_test ...passed 00:09:08.880 Test: set_md_interleave_iovs_split_test ...passed 00:09:08.880 Test: dif_generate_stream_pi_16_test ...passed 00:09:08.880 Test: dif_generate_stream_test ...passed 00:09:08.880 Test: set_md_interleave_iovs_alignment_test ...[2024-04-27 04:51:38.696735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
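The Guard, App Tag, and Ref Tag mismatches logged above are the three fields of the T10 protection-information tuple that _dif_verify checks per data block. As a rough illustration only (this is not SPDK's code: the struct layout and helper names are assumptions, and SPDK also exercises wider guard formats than the 16-bit one shown), a per-block verify of a classic 8-byte DIF tuple looks like this:

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* 8-byte T10 DIF tuple appended to each data block (illustrative layout). */
struct t10_dif_tuple {
	uint16_t guard;    /* CRC-16 of the data block                    */
	uint16_t app_tag;  /* application tag                             */
	uint32_t ref_tag;  /* reference tag, typically low 32 bits of LBA */
};

/* CRC-16/T10-DIF: polynomial 0x8BB7, init 0, no reflection, no final XOR. */
static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
	uint16_t crc = 0;

	for (size_t i = 0; i < len; i++) {
		crc ^= (uint16_t)buf[i] << 8;
		for (int bit = 0; bit < 8; bit++) {
			crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
					     : (uint16_t)(crc << 1);
		}
	}
	return crc;
}

/* Returns true when all three fields match; otherwise prints the same kind
 * of Expected/Actual mismatch seen in the log and returns false. */
static bool dif_verify_block(const uint8_t *data, size_t block_size,
			     const struct t10_dif_tuple *dif,
			     uint16_t exp_app_tag, uint32_t exp_ref_tag,
			     uint64_t lba)
{
	uint16_t guard = crc16_t10dif(data, block_size);

	if (dif->guard != guard) {
		fprintf(stderr, "Guard mismatch: LBA=%ju, Expected=%x, Actual=%x\n",
			(uintmax_t)lba, guard, dif->guard);
		return false;
	}
	if (dif->app_tag != exp_app_tag) {
		fprintf(stderr, "App Tag mismatch: LBA=%ju\n", (uintmax_t)lba);
		return false;
	}
	if (dif->ref_tag != exp_ref_tag) {
		fprintf(stderr, "Ref Tag mismatch: LBA=%ju\n", (uintmax_t)lba);
		return false;
	}
	return true;
}

The *_inject_1_2_4_8_* tests above deliberately corrupt one field at a time, so each corrupted block is expected to produce exactly one of these mismatch messages.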
00:09:08.880 passed 00:09:08.880 Test: dif_generate_split_test ...passed 00:09:08.880 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:09:08.880 Test: dif_verify_split_test ...passed 00:09:08.880 Test: dif_verify_stream_multi_segments_test ...passed 00:09:08.880 Test: update_crc32c_pi_16_test ...passed 00:09:08.880 Test: update_crc32c_test ...passed 00:09:08.880 Test: dif_update_crc32c_split_test ...passed 00:09:08.880 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:09:08.880 Test: get_range_with_md_test ...passed 00:09:08.880 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:09:08.880 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:09:08.880 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:09:08.880 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:09:08.880 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:09:08.880 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:09:08.880 Test: dif_generate_and_verify_unmap_test ...passed 00:09:08.880 00:09:08.880 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.880 suites 1 1 n/a 0 0 00:09:08.880 tests 79 79 79 0 0 00:09:08.880 asserts 3584 3584 3584 0 n/a 00:09:08.880 00:09:08.880 Elapsed time = 0.363 seconds 00:09:08.880 04:51:38 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:09:09.139 00:09:09.139 00:09:09.139 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.139 http://cunit.sourceforge.net/ 00:09:09.139 00:09:09.139 00:09:09.139 Suite: iov 00:09:09.139 Test: test_single_iov ...passed 00:09:09.139 Test: test_simple_iov ...passed 00:09:09.139 Test: test_complex_iov ...passed 00:09:09.139 Test: test_iovs_to_buf ...passed 00:09:09.139 Test: test_buf_to_iovs ...passed 00:09:09.139 Test: test_memset ...passed 00:09:09.139 Test: test_iov_one ...passed 00:09:09.139 Test: test_iov_xfer ...passed 00:09:09.139 00:09:09.139 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.139 suites 1 1 n/a 0 0 00:09:09.139 tests 8 8 8 0 0 00:09:09.139 asserts 156 156 156 0 n/a 00:09:09.139 00:09:09.139 Elapsed time = 0.000 seconds 00:09:09.139 04:51:38 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:09:09.139 00:09:09.139 00:09:09.139 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.139 http://cunit.sourceforge.net/ 00:09:09.139 00:09:09.139 00:09:09.139 Suite: math 00:09:09.139 Test: test_serial_number_arithmetic ...passed 00:09:09.139 Suite: erase 00:09:09.139 Test: test_memset_s ...passed 00:09:09.139 00:09:09.139 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.139 suites 2 2 n/a 0 0 00:09:09.139 tests 2 2 2 0 0 00:09:09.139 asserts 18 18 18 0 n/a 00:09:09.139 00:09:09.139 Elapsed time = 0.000 seconds 00:09:09.139 04:51:38 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:09:09.139 00:09:09.139 00:09:09.139 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.139 http://cunit.sourceforge.net/ 00:09:09.139 00:09:09.139 00:09:09.139 Suite: pipe 00:09:09.140 Test: test_create_destroy ...passed 00:09:09.140 Test: test_write_get_buffer ...passed 00:09:09.140 Test: test_write_advance ...passed 00:09:09.140 Test: test_read_get_buffer ...passed 00:09:09.140 Test: test_read_advance ...passed 00:09:09.140 Test: test_data ...passed 00:09:09.140 00:09:09.140 Run Summary: Type Total Ran 
Passed Failed Inactive 00:09:09.140 suites 1 1 n/a 0 0 00:09:09.140 tests 6 6 6 0 0 00:09:09.140 asserts 250 250 250 0 n/a 00:09:09.140 00:09:09.140 Elapsed time = 0.000 seconds 00:09:09.140 04:51:38 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:09:09.140 00:09:09.140 00:09:09.140 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.140 http://cunit.sourceforge.net/ 00:09:09.140 00:09:09.140 00:09:09.140 Suite: xor 00:09:09.140 Test: test_xor_gen ...passed 00:09:09.140 00:09:09.140 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.140 suites 1 1 n/a 0 0 00:09:09.140 tests 1 1 1 0 0 00:09:09.140 asserts 17 17 17 0 n/a 00:09:09.140 00:09:09.140 Elapsed time = 0.022 seconds 00:09:09.140 00:09:09.140 real 0m0.737s 00:09:09.140 user 0m0.565s 00:09:09.140 sys 0m0.177s 00:09:09.140 04:51:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.140 04:51:38 -- common/autotest_common.sh@10 -- # set +x 00:09:09.140 ************************************ 00:09:09.140 END TEST unittest_util 00:09:09.140 ************************************ 00:09:09.140 04:51:38 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:09.140 04:51:38 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:09:09.140 04:51:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:09.140 04:51:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:09.140 04:51:38 -- common/autotest_common.sh@10 -- # set +x 00:09:09.140 ************************************ 00:09:09.140 START TEST unittest_vhost 00:09:09.140 ************************************ 00:09:09.140 04:51:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:09:09.140 00:09:09.140 00:09:09.140 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.140 http://cunit.sourceforge.net/ 00:09:09.140 00:09:09.140 00:09:09.140 Suite: vhost_suite 00:09:09.140 Test: desc_to_iov_test ...[2024-04-27 04:51:38.958521] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:09:09.140 passed 00:09:09.140 Test: create_controller_test ...[2024-04-27 04:51:38.963111] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:09:09.140 [2024-04-27 04:51:38.963247] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:09:09.140 [2024-04-27 04:51:38.963374] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:09:09.140 [2024-04-27 04:51:38.963473] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:09:09.140 [2024-04-27 04:51:38.963530] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:09:09.140 [2024-04-27 04:51:38.963642] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-04-27 04:51:38.964660] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:09:09.140 passed 00:09:09.140 Test: session_find_by_vid_test ...passed 00:09:09.140 Test: remove_controller_test ...[2024-04-27 04:51:38.966722] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:09:09.140 passed 00:09:09.140 Test: vq_avail_ring_get_test ...passed 00:09:09.140 Test: vq_packed_ring_test ...passed 00:09:09.140 Test: vhost_blk_construct_test ...passed 00:09:09.140 00:09:09.140 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.140 suites 1 1 n/a 0 0 00:09:09.140 tests 7 7 7 0 0 00:09:09.140 asserts 145 145 145 0 n/a 00:09:09.140 00:09:09.140 Elapsed time = 0.012 seconds 00:09:09.140 00:09:09.140 real 0m0.048s 00:09:09.140 user 0m0.028s 00:09:09.140 sys 0m0.020s 00:09:09.140 04:51:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.140 ************************************ 00:09:09.140 END TEST unittest_vhost 00:09:09.140 ************************************ 00:09:09.140 04:51:38 -- common/autotest_common.sh@10 -- # set +x 00:09:09.140 04:51:39 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:09:09.140 04:51:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:09.140 04:51:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:09.140 04:51:39 -- common/autotest_common.sh@10 -- # set +x 00:09:09.140 ************************************ 00:09:09.140 START TEST unittest_dma 00:09:09.140 ************************************ 00:09:09.400 04:51:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:09:09.400 00:09:09.400 00:09:09.400 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.400 http://cunit.sourceforge.net/ 00:09:09.400 00:09:09.400 00:09:09.400 Suite: dma_suite 00:09:09.400 Test: test_dma ...[2024-04-27 04:51:39.047882] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:09:09.400 passed 00:09:09.400 00:09:09.400 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.400 suites 1 1 n/a 0 0 00:09:09.400 tests 1 1 1 0 0 00:09:09.400 asserts 50 50 50 0 n/a 00:09:09.400 00:09:09.400 Elapsed time = 0.001 seconds 00:09:09.400 00:09:09.400 real 0m0.028s 00:09:09.400 user 0m0.016s 00:09:09.400 sys 0m0.012s 00:09:09.400 04:51:39 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.400 04:51:39 -- common/autotest_common.sh@10 -- # set +x 00:09:09.400 ************************************ 00:09:09.400 END TEST unittest_dma 00:09:09.400 ************************************ 00:09:09.400 04:51:39 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:09:09.400 04:51:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:09.400 04:51:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:09.400 04:51:39 -- common/autotest_common.sh@10 -- # set +x 00:09:09.400 ************************************ 00:09:09.400 START TEST unittest_init 00:09:09.400 ************************************ 00:09:09.400 04:51:39 -- common/autotest_common.sh@1104 -- # unittest_init 00:09:09.400 04:51:39 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:09:09.400 00:09:09.400 00:09:09.400 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.400 http://cunit.sourceforge.net/ 00:09:09.400 00:09:09.400 00:09:09.400 Suite: subsystem_suite 00:09:09.400 Test: subsystem_sort_test_depends_on_single ...passed 00:09:09.400 Test: subsystem_sort_test_depends_on_multiple ...passed 00:09:09.400 Test: subsystem_sort_test_missing_dependency ...[2024-04-27 04:51:39.127569] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:09:09.401 passed 00:09:09.401 00:09:09.401 [2024-04-27 04:51:39.128049] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:09:09.401 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.401 suites 1 1 n/a 0 0 00:09:09.401 tests 3 3 3 0 0 00:09:09.401 asserts 20 20 20 0 n/a 00:09:09.401 00:09:09.401 Elapsed time = 0.001 seconds 00:09:09.401 00:09:09.401 real 0m0.034s 00:09:09.401 user 0m0.021s 00:09:09.401 sys 0m0.013s 00:09:09.401 04:51:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.401 04:51:39 -- common/autotest_common.sh@10 -- # set +x 00:09:09.401 ************************************ 00:09:09.401 END TEST unittest_init 00:09:09.401 ************************************ 00:09:09.401 04:51:39 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:09:09.401 04:51:39 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:09:09.401 04:51:39 -- unit/unittest.sh@290 -- # hostname 00:09:09.401 04:51:39 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:09:09.659 geninfo: WARNING: invalid characters removed from testname! 
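Every *_ut binary run in this phase prints the same CUnit banner, per-test "passed" lines, and Run Summary table. For orientation, this is a minimal sketch of how such a suite is assembled; it is generic CUnit usage, not copied from SPDK, and the suite and test names are made up:

#include <CUnit/Basic.h>

/* A trivial test body: CU_ASSERT* macros feed the "asserts" column in the
 * Run Summary printed at the end of each suite. */
static void test_example(void)
{
	CU_ASSERT_EQUAL(1 + 1, 2);
}

int main(void)
{
	CU_pSuite suite;
	unsigned int failures;

	if (CU_initialize_registry() != CUE_SUCCESS)
		return CU_get_error();

	suite = CU_add_suite("example_suite", NULL, NULL);
	if (suite == NULL ||
	    CU_add_test(suite, "test_example", test_example) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}

	CU_basic_set_mode(CU_BRM_VERBOSE);  /* prints the per-test result lines */
	CU_basic_run_tests();
	failures = CU_get_number_of_failures();
	CU_cleanup_registry();

	return failures;
}

Compile with -lcunit; the verbose basic runner is what emits the Run Summary and Elapsed time lines seen throughout the log.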
00:09:41.728 04:52:06 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:09:41.986 04:52:11 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:45.290 04:52:15 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:48.578 04:52:18 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:51.864 04:52:21 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:54.423 04:52:24 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:57.706 04:52:27 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:00.241 04:52:29 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:10:00.241 04:52:29 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:10:00.500 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:00.500 Found 309 entries. 
00:10:00.500 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:10:00.500 Writing .css and .png files. 00:10:00.500 Generating output. 00:10:00.757 Processing file include/linux/virtio_ring.h 00:10:01.015 Processing file include/spdk/thread.h 00:10:01.015 Processing file include/spdk/base64.h 00:10:01.015 Processing file include/spdk/util.h 00:10:01.015 Processing file include/spdk/nvmf_transport.h 00:10:01.015 Processing file include/spdk/histogram_data.h 00:10:01.015 Processing file include/spdk/nvme.h 00:10:01.015 Processing file include/spdk/bdev_module.h 00:10:01.015 Processing file include/spdk/nvme_spec.h 00:10:01.015 Processing file include/spdk/endian.h 00:10:01.015 Processing file include/spdk/mmio.h 00:10:01.015 Processing file include/spdk/trace.h 00:10:01.015 Processing file include/spdk_internal/virtio.h 00:10:01.015 Processing file include/spdk_internal/sgl.h 00:10:01.015 Processing file include/spdk_internal/sock.h 00:10:01.015 Processing file include/spdk_internal/rdma.h 00:10:01.015 Processing file include/spdk_internal/utf.h 00:10:01.015 Processing file include/spdk_internal/nvme_tcp.h 00:10:01.273 Processing file lib/accel/accel.c 00:10:01.273 Processing file lib/accel/accel_rpc.c 00:10:01.273 Processing file lib/accel/accel_sw.c 00:10:01.531 Processing file lib/bdev/bdev_zone.c 00:10:01.531 Processing file lib/bdev/bdev.c 00:10:01.531 Processing file lib/bdev/scsi_nvme.c 00:10:01.531 Processing file lib/bdev/bdev_rpc.c 00:10:01.531 Processing file lib/bdev/part.c 00:10:02.098 Processing file lib/blob/zeroes.c 00:10:02.098 Processing file lib/blob/request.c 00:10:02.098 Processing file lib/blob/blobstore.h 00:10:02.098 Processing file lib/blob/blobstore.c 00:10:02.098 Processing file lib/blob/blob_bs_dev.c 00:10:02.098 Processing file lib/blobfs/tree.c 00:10:02.098 Processing file lib/blobfs/blobfs.c 00:10:02.098 Processing file lib/conf/conf.c 00:10:02.098 Processing file lib/dma/dma.c 00:10:02.665 Processing file lib/env_dpdk/pci.c 00:10:02.665 Processing file lib/env_dpdk/sigbus_handler.c 00:10:02.665 Processing file lib/env_dpdk/pci_dpdk.c 00:10:02.665 Processing file lib/env_dpdk/pci_idxd.c 00:10:02.665 Processing file lib/env_dpdk/pci_vmd.c 00:10:02.665 Processing file lib/env_dpdk/init.c 00:10:02.665 Processing file lib/env_dpdk/threads.c 00:10:02.665 Processing file lib/env_dpdk/env.c 00:10:02.665 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:10:02.665 Processing file lib/env_dpdk/pci_event.c 00:10:02.665 Processing file lib/env_dpdk/pci_virtio.c 00:10:02.665 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:10:02.665 Processing file lib/env_dpdk/pci_ioat.c 00:10:02.665 Processing file lib/env_dpdk/memory.c 00:10:02.665 Processing file lib/event/log_rpc.c 00:10:02.665 Processing file lib/event/scheduler_static.c 00:10:02.665 Processing file lib/event/reactor.c 00:10:02.665 Processing file lib/event/app.c 00:10:02.665 Processing file lib/event/app_rpc.c 00:10:03.230 Processing file lib/ftl/ftl_l2p_cache.c 00:10:03.230 Processing file lib/ftl/ftl_io.c 00:10:03.230 Processing file lib/ftl/ftl_debug.h 00:10:03.230 Processing file lib/ftl/ftl_writer.h 00:10:03.230 Processing file lib/ftl/ftl_reloc.c 00:10:03.230 Processing file lib/ftl/ftl_debug.c 00:10:03.230 Processing file lib/ftl/ftl_l2p.c 00:10:03.230 Processing file lib/ftl/ftl_io.h 00:10:03.230 Processing file lib/ftl/ftl_sb.c 00:10:03.230 Processing file lib/ftl/ftl_band.h 00:10:03.230 Processing file lib/ftl/ftl_nv_cache.c 00:10:03.230 Processing file lib/ftl/ftl_trace.c 00:10:03.230 Processing 
file lib/ftl/ftl_writer.c 00:10:03.230 Processing file lib/ftl/ftl_init.c 00:10:03.230 Processing file lib/ftl/ftl_core.h 00:10:03.230 Processing file lib/ftl/ftl_band_ops.c 00:10:03.230 Processing file lib/ftl/ftl_band.c 00:10:03.230 Processing file lib/ftl/ftl_p2l.c 00:10:03.230 Processing file lib/ftl/ftl_nv_cache_io.h 00:10:03.230 Processing file lib/ftl/ftl_core.c 00:10:03.230 Processing file lib/ftl/ftl_rq.c 00:10:03.230 Processing file lib/ftl/ftl_l2p_flat.c 00:10:03.230 Processing file lib/ftl/ftl_layout.c 00:10:03.230 Processing file lib/ftl/ftl_nv_cache.h 00:10:03.230 Processing file lib/ftl/base/ftl_base_dev.c 00:10:03.230 Processing file lib/ftl/base/ftl_base_bdev.c 00:10:03.487 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:10:03.487 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:10:03.487 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:10:03.487 Processing file lib/ftl/mngt/ftl_mngt.c 00:10:03.487 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:10:03.487 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:10:03.487 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:10:03.487 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:10:03.487 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:10:03.487 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:10:03.487 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:10:03.487 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:10:03.487 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:10:03.487 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:10:03.487 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:10:03.745 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:10:03.745 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:10:03.745 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:10:03.745 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:10:04.003 Processing file lib/ftl/utils/ftl_property.h 00:10:04.003 Processing file lib/ftl/utils/ftl_bitmap.c 00:10:04.003 Processing file lib/ftl/utils/ftl_df.h 00:10:04.003 Processing file lib/ftl/utils/ftl_property.c 00:10:04.003 Processing file lib/ftl/utils/ftl_addr_utils.h 00:10:04.003 Processing file lib/ftl/utils/ftl_md.c 00:10:04.003 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:10:04.003 Processing file lib/ftl/utils/ftl_mempool.c 00:10:04.003 Processing file lib/ftl/utils/ftl_conf.c 00:10:04.003 Processing file lib/idxd/idxd_user.c 00:10:04.003 Processing file lib/idxd/idxd_internal.h 00:10:04.003 Processing file lib/idxd/idxd.c 00:10:04.003 Processing file lib/init/subsystem.c 00:10:04.003 Processing file lib/init/subsystem_rpc.c 00:10:04.003 Processing file lib/init/json_config.c 00:10:04.003 Processing file lib/init/rpc.c 00:10:04.261 Processing file lib/ioat/ioat.c 00:10:04.261 Processing file lib/ioat/ioat_internal.h 00:10:04.519 Processing file lib/iscsi/conn.c 00:10:04.519 Processing file lib/iscsi/md5.c 00:10:04.519 Processing file lib/iscsi/iscsi_rpc.c 00:10:04.519 Processing file lib/iscsi/tgt_node.c 00:10:04.519 Processing file lib/iscsi/task.h 00:10:04.519 Processing file lib/iscsi/init_grp.c 00:10:04.519 Processing file lib/iscsi/task.c 00:10:04.519 Processing file lib/iscsi/portal_grp.c 00:10:04.519 Processing file lib/iscsi/param.c 00:10:04.519 Processing file lib/iscsi/iscsi.c 00:10:04.519 Processing file lib/iscsi/iscsi_subsystem.c 00:10:04.519 Processing file lib/iscsi/iscsi.h 00:10:04.777 Processing file lib/json/json_parse.c 00:10:04.777 Processing file lib/json/json_write.c 00:10:04.777 Processing file lib/json/json_util.c 00:10:04.777 Processing file 
lib/jsonrpc/jsonrpc_server.c 00:10:04.777 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:10:04.777 Processing file lib/jsonrpc/jsonrpc_client.c 00:10:04.777 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:10:04.777 Processing file lib/log/log.c 00:10:04.777 Processing file lib/log/log_deprecated.c 00:10:04.777 Processing file lib/log/log_flags.c 00:10:05.036 Processing file lib/lvol/lvol.c 00:10:05.036 Processing file lib/nbd/nbd.c 00:10:05.036 Processing file lib/nbd/nbd_rpc.c 00:10:05.294 Processing file lib/notify/notify.c 00:10:05.294 Processing file lib/notify/notify_rpc.c 00:10:05.868 Processing file lib/nvme/nvme_zns.c 00:10:05.868 Processing file lib/nvme/nvme_io_msg.c 00:10:05.868 Processing file lib/nvme/nvme.c 00:10:05.868 Processing file lib/nvme/nvme_poll_group.c 00:10:05.868 Processing file lib/nvme/nvme_vfio_user.c 00:10:05.868 Processing file lib/nvme/nvme_ctrlr.c 00:10:05.868 Processing file lib/nvme/nvme_opal.c 00:10:05.868 Processing file lib/nvme/nvme_rdma.c 00:10:05.868 Processing file lib/nvme/nvme_discovery.c 00:10:05.868 Processing file lib/nvme/nvme_quirks.c 00:10:05.868 Processing file lib/nvme/nvme_ns.c 00:10:05.868 Processing file lib/nvme/nvme_pcie.c 00:10:05.868 Processing file lib/nvme/nvme_pcie_internal.h 00:10:05.868 Processing file lib/nvme/nvme_cuse.c 00:10:05.868 Processing file lib/nvme/nvme_transport.c 00:10:05.868 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:10:05.868 Processing file lib/nvme/nvme_fabric.c 00:10:05.868 Processing file lib/nvme/nvme_pcie_common.c 00:10:05.868 Processing file lib/nvme/nvme_qpair.c 00:10:05.868 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:10:05.868 Processing file lib/nvme/nvme_internal.h 00:10:05.868 Processing file lib/nvme/nvme_tcp.c 00:10:05.868 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:10:05.868 Processing file lib/nvme/nvme_ns_cmd.c 00:10:06.459 Processing file lib/nvmf/rdma.c 00:10:06.459 Processing file lib/nvmf/transport.c 00:10:06.459 Processing file lib/nvmf/ctrlr_bdev.c 00:10:06.459 Processing file lib/nvmf/nvmf_internal.h 00:10:06.459 Processing file lib/nvmf/nvmf_rpc.c 00:10:06.459 Processing file lib/nvmf/nvmf.c 00:10:06.459 Processing file lib/nvmf/tcp.c 00:10:06.459 Processing file lib/nvmf/ctrlr.c 00:10:06.459 Processing file lib/nvmf/ctrlr_discovery.c 00:10:06.459 Processing file lib/nvmf/subsystem.c 00:10:06.459 Processing file lib/rdma/rdma_verbs.c 00:10:06.459 Processing file lib/rdma/common.c 00:10:06.718 Processing file lib/rpc/rpc.c 00:10:06.718 Processing file lib/scsi/scsi_bdev.c 00:10:06.718 Processing file lib/scsi/port.c 00:10:06.718 Processing file lib/scsi/scsi_rpc.c 00:10:06.718 Processing file lib/scsi/dev.c 00:10:06.718 Processing file lib/scsi/scsi.c 00:10:06.718 Processing file lib/scsi/scsi_pr.c 00:10:06.718 Processing file lib/scsi/lun.c 00:10:06.718 Processing file lib/scsi/task.c 00:10:06.976 Processing file lib/sock/sock.c 00:10:06.977 Processing file lib/sock/sock_rpc.c 00:10:06.977 Processing file lib/thread/thread.c 00:10:06.977 Processing file lib/thread/iobuf.c 00:10:07.235 Processing file lib/trace/trace_rpc.c 00:10:07.235 Processing file lib/trace/trace_flags.c 00:10:07.235 Processing file lib/trace/trace.c 00:10:07.235 Processing file lib/trace_parser/trace.cpp 00:10:07.235 Processing file lib/ut/ut.c 00:10:07.495 Processing file lib/ut_mock/mock.c 00:10:07.754 Processing file lib/util/crc32c.c 00:10:07.754 Processing file lib/util/zipf.c 00:10:07.754 Processing file lib/util/fd_group.c 00:10:07.754 Processing file lib/util/crc32.c 00:10:07.754 
Processing file lib/util/crc32_ieee.c 00:10:07.754 Processing file lib/util/cpuset.c 00:10:07.754 Processing file lib/util/crc16.c 00:10:07.754 Processing file lib/util/uuid.c 00:10:07.754 Processing file lib/util/hexlify.c 00:10:07.754 Processing file lib/util/crc64.c 00:10:07.754 Processing file lib/util/base64.c 00:10:07.754 Processing file lib/util/math.c 00:10:07.754 Processing file lib/util/file.c 00:10:07.754 Processing file lib/util/bit_array.c 00:10:07.754 Processing file lib/util/iov.c 00:10:07.754 Processing file lib/util/dif.c 00:10:07.754 Processing file lib/util/string.c 00:10:07.754 Processing file lib/util/strerror_tls.c 00:10:07.754 Processing file lib/util/fd.c 00:10:07.754 Processing file lib/util/pipe.c 00:10:07.754 Processing file lib/util/xor.c 00:10:08.013 Processing file lib/vfio_user/host/vfio_user.c 00:10:08.013 Processing file lib/vfio_user/host/vfio_user_pci.c 00:10:08.013 Processing file lib/vhost/vhost_scsi.c 00:10:08.013 Processing file lib/vhost/vhost_blk.c 00:10:08.013 Processing file lib/vhost/rte_vhost_user.c 00:10:08.013 Processing file lib/vhost/vhost_internal.h 00:10:08.013 Processing file lib/vhost/vhost.c 00:10:08.013 Processing file lib/vhost/vhost_rpc.c 00:10:08.272 Processing file lib/virtio/virtio_vfio_user.c 00:10:08.272 Processing file lib/virtio/virtio_pci.c 00:10:08.272 Processing file lib/virtio/virtio.c 00:10:08.272 Processing file lib/virtio/virtio_vhost_user.c 00:10:08.272 Processing file lib/vmd/vmd.c 00:10:08.272 Processing file lib/vmd/led.c 00:10:08.531 Processing file module/accel/dsa/accel_dsa.c 00:10:08.531 Processing file module/accel/dsa/accel_dsa_rpc.c 00:10:08.531 Processing file module/accel/error/accel_error.c 00:10:08.531 Processing file module/accel/error/accel_error_rpc.c 00:10:08.531 Processing file module/accel/iaa/accel_iaa.c 00:10:08.531 Processing file module/accel/iaa/accel_iaa_rpc.c 00:10:08.790 Processing file module/accel/ioat/accel_ioat_rpc.c 00:10:08.790 Processing file module/accel/ioat/accel_ioat.c 00:10:08.790 Processing file module/bdev/aio/bdev_aio_rpc.c 00:10:08.790 Processing file module/bdev/aio/bdev_aio.c 00:10:08.790 Processing file module/bdev/delay/vbdev_delay.c 00:10:08.790 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:10:09.048 Processing file module/bdev/error/vbdev_error_rpc.c 00:10:09.048 Processing file module/bdev/error/vbdev_error.c 00:10:09.048 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:10:09.048 Processing file module/bdev/ftl/bdev_ftl.c 00:10:09.306 Processing file module/bdev/gpt/gpt.h 00:10:09.306 Processing file module/bdev/gpt/vbdev_gpt.c 00:10:09.306 Processing file module/bdev/gpt/gpt.c 00:10:09.306 Processing file module/bdev/iscsi/bdev_iscsi.c 00:10:09.306 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:10:09.306 Processing file module/bdev/lvol/vbdev_lvol.c 00:10:09.306 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:10:09.564 Processing file module/bdev/malloc/bdev_malloc.c 00:10:09.564 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:10:09.564 Processing file module/bdev/null/bdev_null_rpc.c 00:10:09.564 Processing file module/bdev/null/bdev_null.c 00:10:09.823 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:10:09.823 Processing file module/bdev/nvme/bdev_mdns_client.c 00:10:09.823 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:10:09.823 Processing file module/bdev/nvme/nvme_rpc.c 00:10:09.823 Processing file module/bdev/nvme/bdev_nvme.c 00:10:09.823 Processing file module/bdev/nvme/vbdev_opal.c 00:10:09.823 Processing file 
module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:10:10.081 Processing file module/bdev/passthru/vbdev_passthru.c 00:10:10.081 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:10:10.339 Processing file module/bdev/raid/raid0.c 00:10:10.339 Processing file module/bdev/raid/bdev_raid.c 00:10:10.339 Processing file module/bdev/raid/bdev_raid_rpc.c 00:10:10.339 Processing file module/bdev/raid/raid1.c 00:10:10.339 Processing file module/bdev/raid/bdev_raid.h 00:10:10.339 Processing file module/bdev/raid/bdev_raid_sb.c 00:10:10.339 Processing file module/bdev/raid/raid5f.c 00:10:10.339 Processing file module/bdev/raid/concat.c 00:10:10.339 Processing file module/bdev/split/vbdev_split_rpc.c 00:10:10.339 Processing file module/bdev/split/vbdev_split.c 00:10:10.339 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:10:10.339 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:10:10.339 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:10:10.598 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:10:10.598 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:10:10.598 Processing file module/blob/bdev/blob_bdev.c 00:10:10.598 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:10:10.598 Processing file module/blobfs/bdev/blobfs_bdev.c 00:10:10.855 Processing file module/env_dpdk/env_dpdk_rpc.c 00:10:10.856 Processing file module/event/subsystems/accel/accel.c 00:10:10.856 Processing file module/event/subsystems/bdev/bdev.c 00:10:10.856 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:10:10.856 Processing file module/event/subsystems/iobuf/iobuf.c 00:10:11.114 Processing file module/event/subsystems/iscsi/iscsi.c 00:10:11.114 Processing file module/event/subsystems/nbd/nbd.c 00:10:11.114 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:10:11.114 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:10:11.373 Processing file module/event/subsystems/scheduler/scheduler.c 00:10:11.373 Processing file module/event/subsystems/scsi/scsi.c 00:10:11.373 Processing file module/event/subsystems/sock/sock.c 00:10:11.373 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:10:11.631 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:10:11.631 Processing file module/event/subsystems/vmd/vmd.c 00:10:11.631 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:10:11.631 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:10:11.631 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:10:11.890 Processing file module/scheduler/gscheduler/gscheduler.c 00:10:11.890 Processing file module/sock/sock_kernel.h 00:10:11.890 Processing file module/sock/posix/posix.c 00:10:11.890 Writing directory view page. 
00:10:11.890 Overall coverage rate: 00:10:11.890 lines......: 39.1% (39213 of 100368 lines) 00:10:11.890 functions..: 42.7% (3582 of 8379 functions) 00:10:11.890 00:10:11.890 00:10:11.890 ===================== 00:10:11.890 All unit tests passed 00:10:11.890 ===================== 00:10:11.890 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:10:11.890 04:52:41 -- unit/unittest.sh@302 -- # set +x 00:10:11.890 00:10:11.890 00:10:11.890 00:10:11.890 real 3m25.432s 00:10:11.890 user 2m59.011s 00:10:11.890 sys 0m16.570s 00:10:11.890 04:52:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.890 04:52:41 -- common/autotest_common.sh@10 -- # set +x 00:10:11.890 ************************************ 00:10:11.890 END TEST unittest 00:10:11.890 ************************************ 00:10:12.149 04:52:41 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:10:12.149 04:52:41 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:10:12.149 04:52:41 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:10:12.149 04:52:41 -- spdk/autotest.sh@173 -- # timing_enter lib 00:10:12.149 04:52:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:12.149 04:52:41 -- common/autotest_common.sh@10 -- # set +x 00:10:12.149 04:52:41 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:12.149 04:52:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:12.149 04:52:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:12.149 04:52:41 -- common/autotest_common.sh@10 -- # set +x 00:10:12.149 ************************************ 00:10:12.149 START TEST env 00:10:12.149 ************************************ 00:10:12.149 04:52:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:12.149 * Looking for test storage... 
00:10:12.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:10:12.149 04:52:41 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:12.149 04:52:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:12.149 04:52:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:12.149 04:52:41 -- common/autotest_common.sh@10 -- # set +x 00:10:12.149 ************************************ 00:10:12.149 START TEST env_memory 00:10:12.149 ************************************ 00:10:12.149 04:52:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:12.149 00:10:12.149 00:10:12.149 CUnit - A unit testing framework for C - Version 2.1-3 00:10:12.149 http://cunit.sourceforge.net/ 00:10:12.149 00:10:12.149 00:10:12.149 Suite: memory 00:10:12.149 Test: alloc and free memory map ...[2024-04-27 04:52:41.981233] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:10:12.149 passed 00:10:12.149 Test: mem map translation ...[2024-04-27 04:52:42.031211] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:10:12.149 [2024-04-27 04:52:42.031365] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:10:12.149 [2024-04-27 04:52:42.031503] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:10:12.149 [2024-04-27 04:52:42.031601] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:10:12.407 passed 00:10:12.407 Test: mem map registration ...[2024-04-27 04:52:42.118005] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:10:12.407 [2024-04-27 04:52:42.118143] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:10:12.407 passed 00:10:12.407 Test: mem map adjacent registrations ...passed 00:10:12.407 00:10:12.407 Run Summary: Type Total Ran Passed Failed Inactive 00:10:12.407 suites 1 1 n/a 0 0 00:10:12.407 tests 4 4 4 0 0 00:10:12.407 asserts 152 152 152 0 n/a 00:10:12.407 00:10:12.407 Elapsed time = 0.299 seconds 00:10:12.407 00:10:12.407 real 0m0.334s 00:10:12.407 user 0m0.314s 00:10:12.407 sys 0m0.020s 00:10:12.407 04:52:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.407 04:52:42 -- common/autotest_common.sh@10 -- # set +x 00:10:12.407 ************************************ 00:10:12.407 END TEST env_memory 00:10:12.407 ************************************ 00:10:12.407 04:52:42 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:12.407 04:52:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:12.407 04:52:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:12.407 04:52:42 -- common/autotest_common.sh@10 -- # set +x 00:10:12.668 ************************************ 00:10:12.668 START TEST env_vtophys 00:10:12.668 ************************************ 00:10:12.668 04:52:42 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:12.668 EAL: lib.eal log level changed from notice to debug 00:10:12.668 EAL: Detected lcore 0 as core 0 on socket 0 00:10:12.668 EAL: Detected lcore 1 as core 0 on socket 0 00:10:12.668 EAL: Detected lcore 2 as core 0 on socket 0 00:10:12.668 EAL: Detected lcore 3 as core 0 on socket 0 00:10:12.668 EAL: Detected lcore 4 as core 0 on socket 0 00:10:12.668 EAL: Detected lcore 5 as core 0 on socket 0 00:10:12.668 EAL: Detected lcore 6 as core 0 on socket 0 00:10:12.668 EAL: Detected lcore 7 as core 0 on socket 0 00:10:12.668 EAL: Detected lcore 8 as core 0 on socket 0 00:10:12.668 EAL: Detected lcore 9 as core 0 on socket 0 00:10:12.668 EAL: Maximum logical cores by configuration: 128 00:10:12.668 EAL: Detected CPU lcores: 10 00:10:12.668 EAL: Detected NUMA nodes: 1 00:10:12.668 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:10:12.668 EAL: Checking presence of .so 'librte_eal.so.24' 00:10:12.668 EAL: Checking presence of .so 'librte_eal.so' 00:10:12.668 EAL: Detected static linkage of DPDK 00:10:12.668 EAL: No shared files mode enabled, IPC will be disabled 00:10:12.668 EAL: Selected IOVA mode 'PA' 00:10:12.668 EAL: Probing VFIO support... 00:10:12.668 EAL: IOMMU type 1 (Type 1) is supported 00:10:12.668 EAL: IOMMU type 7 (sPAPR) is not supported 00:10:12.668 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:10:12.668 EAL: VFIO support initialized 00:10:12.668 EAL: Ask a virtual area of 0x2e000 bytes 00:10:12.668 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:10:12.668 EAL: Setting up physically contiguous memory... 00:10:12.668 EAL: Setting maximum number of open files to 1048576 00:10:12.668 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:10:12.668 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:10:12.668 EAL: Ask a virtual area of 0x61000 bytes 00:10:12.668 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:10:12.668 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:12.668 EAL: Ask a virtual area of 0x400000000 bytes 00:10:12.668 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:10:12.668 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:10:12.668 EAL: Ask a virtual area of 0x61000 bytes 00:10:12.668 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:10:12.668 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:12.668 EAL: Ask a virtual area of 0x400000000 bytes 00:10:12.668 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:10:12.668 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:10:12.668 EAL: Ask a virtual area of 0x61000 bytes 00:10:12.668 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:10:12.668 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:12.668 EAL: Ask a virtual area of 0x400000000 bytes 00:10:12.668 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:10:12.668 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:10:12.668 EAL: Ask a virtual area of 0x61000 bytes 00:10:12.668 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:10:12.668 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:12.668 EAL: Ask a virtual area of 0x400000000 bytes 00:10:12.668 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:10:12.668 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:10:12.668 EAL: Hugepages will be freed exactly as allocated. 
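The vtophys test that follows depends on resolving a virtual address to a physical one. As a rough illustration of the idea only (this is not SPDK's implementation, which also has IOMMU/IOVA paths), the classic userspace route on Linux is /proc/self/pagemap; reading the PFN bits generally requires root:

#include <fcntl.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Translate a virtual address to a physical one via /proc/self/pagemap.
 * Each page has one 64-bit entry: bit 63 = present, bits 0-54 = PFN. */
static uint64_t virt_to_phys(const void *vaddr)
{
	long page_size = sysconf(_SC_PAGESIZE);
	uint64_t virt = (uint64_t)(uintptr_t)vaddr;
	uint64_t entry = 0;
	int fd = open("/proc/self/pagemap", O_RDONLY);

	if (fd < 0)
		return UINT64_MAX;
	if (pread(fd, &entry, sizeof(entry),
		  (off_t)(virt / (uint64_t)page_size) * sizeof(entry)) != sizeof(entry)) {
		close(fd);
		return UINT64_MAX;
	}
	close(fd);
	if (!(entry & (1ULL << 63)))	/* page not present/mapped */
		return UINT64_MAX;
	return (entry & ((1ULL << 55) - 1)) * (uint64_t)page_size +
	       virt % (uint64_t)page_size;
}

int main(void)
{
	void *buf = malloc(4096);

	memset(buf, 0, 4096);	/* touch the page so it is actually mapped */
	printf("vaddr=%p paddr=0x%" PRIx64 "\n", buf, virt_to_phys(buf));
	free(buf);
	return 0;
}

Per-lookup translation like this is slow and not DMA-safe on its own, which is why the EAL output above carves memory out of pre-reserved, pinned hugepage segment lists instead.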
00:10:12.668 EAL: No shared files mode enabled, IPC is disabled 00:10:12.668 EAL: No shared files mode enabled, IPC is disabled 00:10:12.668 EAL: TSC frequency is ~2200000 KHz 00:10:12.668 EAL: Main lcore 0 is ready (tid=7f206ff11a80;cpuset=[0]) 00:10:12.668 EAL: Trying to obtain current memory policy. 00:10:12.668 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:12.668 EAL: Restoring previous memory policy: 0 00:10:12.668 EAL: request: mp_malloc_sync 00:10:12.668 EAL: No shared files mode enabled, IPC is disabled 00:10:12.668 EAL: Heap on socket 0 was expanded by 2MB 00:10:12.668 EAL: No shared files mode enabled, IPC is disabled 00:10:12.668 EAL: Mem event callback 'spdk:(nil)' registered 00:10:12.668 00:10:12.668 00:10:12.668 CUnit - A unit testing framework for C - Version 2.1-3 00:10:12.668 http://cunit.sourceforge.net/ 00:10:12.668 00:10:12.668 00:10:12.668 Suite: components_suite 00:10:13.241 Test: vtophys_malloc_test ...passed 00:10:13.242 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:10:13.242 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:13.242 EAL: Restoring previous memory policy: 0 00:10:13.242 EAL: Calling mem event callback 'spdk:(nil)' 00:10:13.242 EAL: request: mp_malloc_sync 00:10:13.242 EAL: No shared files mode enabled, IPC is disabled 00:10:13.242 EAL: Heap on socket 0 was expanded by 4MB 00:10:13.242 EAL: Calling mem event callback 'spdk:(nil)' 00:10:13.242 EAL: request: mp_malloc_sync 00:10:13.242 EAL: No shared files mode enabled, IPC is disabled 00:10:13.242 EAL: Heap on socket 0 was shrunk by 4MB 00:10:13.242 EAL: Trying to obtain current memory policy. 00:10:13.242 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:13.242 EAL: Restoring previous memory policy: 0 00:10:13.242 EAL: Calling mem event callback 'spdk:(nil)' 00:10:13.242 EAL: request: mp_malloc_sync 00:10:13.242 EAL: No shared files mode enabled, IPC is disabled 00:10:13.242 EAL: Heap on socket 0 was expanded by 6MB 00:10:13.242 EAL: Calling mem event callback 'spdk:(nil)' 00:10:13.242 EAL: request: mp_malloc_sync 00:10:13.242 EAL: No shared files mode enabled, IPC is disabled 00:10:13.242 EAL: Heap on socket 0 was shrunk by 6MB 00:10:13.242 EAL: Trying to obtain current memory policy. 00:10:13.242 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:13.242 EAL: Restoring previous memory policy: 0 00:10:13.242 EAL: Calling mem event callback 'spdk:(nil)' 00:10:13.242 EAL: request: mp_malloc_sync 00:10:13.242 EAL: No shared files mode enabled, IPC is disabled 00:10:13.242 EAL: Heap on socket 0 was expanded by 10MB 00:10:13.242 EAL: Calling mem event callback 'spdk:(nil)' 00:10:13.242 EAL: request: mp_malloc_sync 00:10:13.242 EAL: No shared files mode enabled, IPC is disabled 00:10:13.242 EAL: Heap on socket 0 was shrunk by 10MB 00:10:13.242 EAL: Trying to obtain current memory policy. 00:10:13.242 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:13.242 EAL: Restoring previous memory policy: 0 00:10:13.242 EAL: Calling mem event callback 'spdk:(nil)' 00:10:13.242 EAL: request: mp_malloc_sync 00:10:13.242 EAL: No shared files mode enabled, IPC is disabled 00:10:13.242 EAL: Heap on socket 0 was expanded by 18MB 00:10:13.242 EAL: Calling mem event callback 'spdk:(nil)' 00:10:13.242 EAL: request: mp_malloc_sync 00:10:13.242 EAL: No shared files mode enabled, IPC is disabled 00:10:13.242 EAL: Heap on socket 0 was shrunk by 18MB 00:10:13.242 EAL: Trying to obtain current memory policy. 
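The "Mem event callback 'spdk:(nil)' registered" line and the "Heap on socket 0 was expanded/shrunk by ..." messages below come from DPDK's dynamic memory subsystem notifying registered listeners as hugepage segments are added to or removed from a heap. A small generic DPDK sketch of that mechanism (not the SPDK callback itself; whether a given rte_malloc/rte_free actually grows or shrinks the heap depends on allocator state):

#include <stdio.h>
#include <rte_eal.h>
#include <rte_malloc.h>
#include <rte_memory.h>

/* Called by EAL whenever hugepage memory is added to or removed from a heap. */
static void
mem_event_cb(enum rte_mem_event event_type, const void *addr, size_t len, void *arg)
{
	(void)arg;
	printf("mem event: %s addr=%p len=%zu\n",
	       event_type == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
}

int
main(int argc, char **argv)
{
	void *buf;

	if (rte_eal_init(argc, argv) < 0)
		return 1;

	rte_mem_event_callback_register("demo", mem_event_cb, NULL);

	buf = rte_malloc(NULL, 32 * 1024 * 1024, 0);	/* may expand the heap */
	rte_free(buf);					/* may let EAL shrink it again */

	return 0;
}

The vtophys_spdk_malloc_test below drives exactly this path with progressively larger allocations, which is why each "expanded by NMB" is paired with a matching "shrunk by NMB" once the buffer is freed.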
00:10:13.242 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:13.242 EAL: Restoring previous memory policy: 0 00:10:13.242 EAL: Calling mem event callback 'spdk:(nil)' 00:10:13.242 EAL: request: mp_malloc_sync 00:10:13.242 EAL: No shared files mode enabled, IPC is disabled 00:10:13.242 EAL: Heap on socket 0 was expanded by 34MB 00:10:13.242 EAL: Calling mem event callback 'spdk:(nil)' 00:10:13.242 EAL: request: mp_malloc_sync 00:10:13.242 EAL: No shared files mode enabled, IPC is disabled 00:10:13.242 EAL: Heap on socket 0 was shrunk by 34MB 00:10:13.242 EAL: Trying to obtain current memory policy. 00:10:13.242 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:13.242 EAL: Restoring previous memory policy: 0 00:10:13.242 EAL: Calling mem event callback 'spdk:(nil)' 00:10:13.242 EAL: request: mp_malloc_sync 00:10:13.242 EAL: No shared files mode enabled, IPC is disabled 00:10:13.242 EAL: Heap on socket 0 was expanded by 66MB 00:10:13.242 EAL: Calling mem event callback 'spdk:(nil)' 00:10:13.501 EAL: request: mp_malloc_sync 00:10:13.501 EAL: No shared files mode enabled, IPC is disabled 00:10:13.501 EAL: Heap on socket 0 was shrunk by 66MB 00:10:13.501 EAL: Trying to obtain current memory policy. 00:10:13.501 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:13.501 EAL: Restoring previous memory policy: 0 00:10:13.501 EAL: Calling mem event callback 'spdk:(nil)' 00:10:13.501 EAL: request: mp_malloc_sync 00:10:13.501 EAL: No shared files mode enabled, IPC is disabled 00:10:13.501 EAL: Heap on socket 0 was expanded by 130MB 00:10:13.501 EAL: Calling mem event callback 'spdk:(nil)' 00:10:13.501 EAL: request: mp_malloc_sync 00:10:13.501 EAL: No shared files mode enabled, IPC is disabled 00:10:13.501 EAL: Heap on socket 0 was shrunk by 130MB 00:10:13.501 EAL: Trying to obtain current memory policy. 00:10:13.501 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:13.759 EAL: Restoring previous memory policy: 0 00:10:13.759 EAL: Calling mem event callback 'spdk:(nil)' 00:10:13.759 EAL: request: mp_malloc_sync 00:10:13.759 EAL: No shared files mode enabled, IPC is disabled 00:10:13.759 EAL: Heap on socket 0 was expanded by 258MB 00:10:13.759 EAL: Calling mem event callback 'spdk:(nil)' 00:10:13.759 EAL: request: mp_malloc_sync 00:10:13.759 EAL: No shared files mode enabled, IPC is disabled 00:10:13.759 EAL: Heap on socket 0 was shrunk by 258MB 00:10:13.759 EAL: Trying to obtain current memory policy. 00:10:13.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:14.016 EAL: Restoring previous memory policy: 0 00:10:14.016 EAL: Calling mem event callback 'spdk:(nil)' 00:10:14.016 EAL: request: mp_malloc_sync 00:10:14.016 EAL: No shared files mode enabled, IPC is disabled 00:10:14.016 EAL: Heap on socket 0 was expanded by 514MB 00:10:14.274 EAL: Calling mem event callback 'spdk:(nil)' 00:10:14.533 EAL: request: mp_malloc_sync 00:10:14.533 EAL: No shared files mode enabled, IPC is disabled 00:10:14.533 EAL: Heap on socket 0 was shrunk by 514MB 00:10:14.534 EAL: Trying to obtain current memory policy. 
00:10:14.534 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:14.793 EAL: Restoring previous memory policy: 0 00:10:14.793 EAL: Calling mem event callback 'spdk:(nil)' 00:10:14.793 EAL: request: mp_malloc_sync 00:10:14.793 EAL: No shared files mode enabled, IPC is disabled 00:10:14.793 EAL: Heap on socket 0 was expanded by 1026MB 00:10:15.361 EAL: Calling mem event callback 'spdk:(nil)' 00:10:15.619 EAL: request: mp_malloc_sync 00:10:15.619 EAL: No shared files mode enabled, IPC is disabled 00:10:15.619 EAL: Heap on socket 0 was shrunk by 1026MB 00:10:15.619 passed 00:10:15.619 00:10:15.619 Run Summary: Type Total Ran Passed Failed Inactive 00:10:15.619 suites 1 1 n/a 0 0 00:10:15.619 tests 2 2 2 0 0 00:10:15.619 asserts 6317 6317 6317 0 n/a 00:10:15.619 00:10:15.619 Elapsed time = 2.756 seconds 00:10:15.619 EAL: Calling mem event callback 'spdk:(nil)' 00:10:15.619 EAL: request: mp_malloc_sync 00:10:15.619 EAL: No shared files mode enabled, IPC is disabled 00:10:15.619 EAL: Heap on socket 0 was shrunk by 2MB 00:10:15.619 EAL: No shared files mode enabled, IPC is disabled 00:10:15.619 EAL: No shared files mode enabled, IPC is disabled 00:10:15.619 EAL: No shared files mode enabled, IPC is disabled 00:10:15.619 00:10:15.619 real 0m3.028s 00:10:15.619 user 0m1.581s 00:10:15.619 sys 0m1.319s 00:10:15.619 04:52:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.619 04:52:45 -- common/autotest_common.sh@10 -- # set +x 00:10:15.619 ************************************ 00:10:15.619 END TEST env_vtophys 00:10:15.619 ************************************ 00:10:15.619 04:52:45 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:15.619 04:52:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:15.619 04:52:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:15.619 04:52:45 -- common/autotest_common.sh@10 -- # set +x 00:10:15.619 ************************************ 00:10:15.619 START TEST env_pci 00:10:15.619 ************************************ 00:10:15.619 04:52:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:15.619 00:10:15.619 00:10:15.619 CUnit - A unit testing framework for C - Version 2.1-3 00:10:15.619 http://cunit.sourceforge.net/ 00:10:15.619 00:10:15.619 00:10:15.619 Suite: pci 00:10:15.619 Test: pci_hook ...[2024-04-27 04:52:45.415846] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 116050 has claimed it 00:10:15.619 passed 00:10:15.619 00:10:15.619 Run Summary: Type Total Ran Passed Failed Inactive 00:10:15.619 suites 1 1 n/a 0 0 00:10:15.619 tests 1 1 1 0 0 00:10:15.620 asserts 25 25 25 0 n/a 00:10:15.620 00:10:15.620 Elapsed time = 0.004 seconds 00:10:15.620 EAL: Cannot find device (10000:00:01.0) 00:10:15.620 EAL: Failed to attach device on primary process 00:10:15.620 00:10:15.620 real 0m0.065s 00:10:15.620 user 0m0.017s 00:10:15.620 sys 0m0.049s 00:10:15.620 04:52:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.620 04:52:45 -- common/autotest_common.sh@10 -- # set +x 00:10:15.620 ************************************ 00:10:15.620 END TEST env_pci 00:10:15.620 ************************************ 00:10:15.620 04:52:45 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:10:15.620 04:52:45 -- env/env.sh@15 -- # uname 00:10:15.620 04:52:45 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:10:15.620 04:52:45 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:10:15.620 04:52:45 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:15.620 04:52:45 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:15.620 04:52:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:15.620 04:52:45 -- common/autotest_common.sh@10 -- # set +x 00:10:15.620 ************************************ 00:10:15.620 START TEST env_dpdk_post_init 00:10:15.620 ************************************ 00:10:15.620 04:52:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:15.879 EAL: Detected CPU lcores: 10 00:10:15.879 EAL: Detected NUMA nodes: 1 00:10:15.879 EAL: Detected static linkage of DPDK 00:10:15.879 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:15.879 EAL: Selected IOVA mode 'PA' 00:10:15.879 EAL: VFIO support initialized 00:10:15.879 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:15.879 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:10:15.879 Starting DPDK initialization... 00:10:15.879 Starting SPDK post initialization... 00:10:15.879 SPDK NVMe probe 00:10:15.879 Attaching to 0000:00:06.0 00:10:15.879 Attached to 0000:00:06.0 00:10:15.879 Cleaning up... 00:10:16.137 00:10:16.137 real 0m0.264s 00:10:16.137 user 0m0.064s 00:10:16.137 sys 0m0.102s 00:10:16.137 04:52:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.137 04:52:45 -- common/autotest_common.sh@10 -- # set +x 00:10:16.137 ************************************ 00:10:16.137 END TEST env_dpdk_post_init 00:10:16.137 ************************************ 00:10:16.137 04:52:45 -- env/env.sh@26 -- # uname 00:10:16.137 04:52:45 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:16.137 04:52:45 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:16.137 04:52:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:16.137 04:52:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:16.137 04:52:45 -- common/autotest_common.sh@10 -- # set +x 00:10:16.137 ************************************ 00:10:16.137 START TEST env_mem_callbacks 00:10:16.137 ************************************ 00:10:16.137 04:52:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:16.137 EAL: Detected CPU lcores: 10 00:10:16.137 EAL: Detected NUMA nodes: 1 00:10:16.137 EAL: Detected static linkage of DPDK 00:10:16.137 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:16.137 EAL: Selected IOVA mode 'PA' 00:10:16.137 EAL: VFIO support initialized 00:10:16.137 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:16.137 00:10:16.137 00:10:16.137 CUnit - A unit testing framework for C - Version 2.1-3 00:10:16.137 http://cunit.sourceforge.net/ 00:10:16.137 00:10:16.138 00:10:16.138 Suite: memory 00:10:16.138 Test: test ... 
00:10:16.138 register 0x200000200000 2097152 00:10:16.138 malloc 3145728 00:10:16.138 register 0x200000400000 4194304 00:10:16.138 buf 0x200000500000 len 3145728 PASSED 00:10:16.138 malloc 64 00:10:16.138 buf 0x2000004fff40 len 64 PASSED 00:10:16.138 malloc 4194304 00:10:16.138 register 0x200000800000 6291456 00:10:16.138 buf 0x200000a00000 len 4194304 PASSED 00:10:16.138 free 0x200000500000 3145728 00:10:16.138 free 0x2000004fff40 64 00:10:16.138 unregister 0x200000400000 4194304 PASSED 00:10:16.138 free 0x200000a00000 4194304 00:10:16.138 unregister 0x200000800000 6291456 PASSED 00:10:16.138 malloc 8388608 00:10:16.138 register 0x200000400000 10485760 00:10:16.138 buf 0x200000600000 len 8388608 PASSED 00:10:16.138 free 0x200000600000 8388608 00:10:16.138 unregister 0x200000400000 10485760 PASSED 00:10:16.138 passed 00:10:16.138 00:10:16.138 Run Summary: Type Total Ran Passed Failed Inactive 00:10:16.138 suites 1 1 n/a 0 0 00:10:16.138 tests 1 1 1 0 0 00:10:16.138 asserts 15 15 15 0 n/a 00:10:16.138 00:10:16.138 Elapsed time = 0.008 seconds 00:10:16.397 00:10:16.397 real 0m0.222s 00:10:16.397 user 0m0.036s 00:10:16.397 sys 0m0.087s 00:10:16.397 04:52:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.397 04:52:46 -- common/autotest_common.sh@10 -- # set +x 00:10:16.397 ************************************ 00:10:16.397 END TEST env_mem_callbacks 00:10:16.397 ************************************ 00:10:16.397 00:10:16.397 real 0m4.265s 00:10:16.397 user 0m2.201s 00:10:16.397 sys 0m1.727s 00:10:16.397 04:52:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.397 ************************************ 00:10:16.397 END TEST env 00:10:16.397 ************************************ 00:10:16.397 04:52:46 -- common/autotest_common.sh@10 -- # set +x 00:10:16.397 04:52:46 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:16.397 04:52:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:16.397 04:52:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:16.397 04:52:46 -- common/autotest_common.sh@10 -- # set +x 00:10:16.397 ************************************ 00:10:16.397 START TEST rpc 00:10:16.397 ************************************ 00:10:16.397 04:52:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:16.397 * Looking for test storage... 00:10:16.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:16.397 04:52:46 -- rpc/rpc.sh@65 -- # spdk_pid=116180 00:10:16.397 04:52:46 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:16.397 04:52:46 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:10:16.397 04:52:46 -- rpc/rpc.sh@67 -- # waitforlisten 116180 00:10:16.397 04:52:46 -- common/autotest_common.sh@819 -- # '[' -z 116180 ']' 00:10:16.397 04:52:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.397 04:52:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:16.397 04:52:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:16.397 04:52:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:16.397 04:52:46 -- common/autotest_common.sh@10 -- # set +x 00:10:16.656 [2024-04-27 04:52:46.305098] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:16.656 [2024-04-27 04:52:46.305400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116180 ] 00:10:16.656 [2024-04-27 04:52:46.464647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.915 [2024-04-27 04:52:46.581380] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:16.916 [2024-04-27 04:52:46.581685] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:16.916 [2024-04-27 04:52:46.581725] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 116180' to capture a snapshot of events at runtime. 00:10:16.916 [2024-04-27 04:52:46.581757] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid116180 for offline analysis/debug. 00:10:16.916 [2024-04-27 04:52:46.581927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.484 04:52:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:17.484 04:52:47 -- common/autotest_common.sh@852 -- # return 0 00:10:17.484 04:52:47 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:17.484 04:52:47 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:17.484 04:52:47 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:10:17.484 04:52:47 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:10:17.484 04:52:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:17.484 04:52:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:17.484 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:10:17.484 ************************************ 00:10:17.484 START TEST rpc_integrity 00:10:17.484 ************************************ 00:10:17.484 04:52:47 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:10:17.484 04:52:47 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:17.484 04:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.484 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:10:17.484 04:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.484 04:52:47 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:17.484 04:52:47 -- rpc/rpc.sh@13 -- # jq length 00:10:17.484 04:52:47 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:17.484 04:52:47 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:17.484 04:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.484 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:10:17.484 04:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.484 04:52:47 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:10:17.484 04:52:47 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:17.484 04:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.484 04:52:47 -- 
common/autotest_common.sh@10 -- # set +x 00:10:17.743 04:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.743 04:52:47 -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:17.743 { 00:10:17.743 "name": "Malloc0", 00:10:17.743 "aliases": [ 00:10:17.743 "45d75750-6677-4d8d-b4c6-a8b0df1b4da5" 00:10:17.743 ], 00:10:17.743 "product_name": "Malloc disk", 00:10:17.743 "block_size": 512, 00:10:17.743 "num_blocks": 16384, 00:10:17.743 "uuid": "45d75750-6677-4d8d-b4c6-a8b0df1b4da5", 00:10:17.743 "assigned_rate_limits": { 00:10:17.743 "rw_ios_per_sec": 0, 00:10:17.743 "rw_mbytes_per_sec": 0, 00:10:17.743 "r_mbytes_per_sec": 0, 00:10:17.743 "w_mbytes_per_sec": 0 00:10:17.743 }, 00:10:17.743 "claimed": false, 00:10:17.743 "zoned": false, 00:10:17.743 "supported_io_types": { 00:10:17.743 "read": true, 00:10:17.743 "write": true, 00:10:17.743 "unmap": true, 00:10:17.743 "write_zeroes": true, 00:10:17.743 "flush": true, 00:10:17.743 "reset": true, 00:10:17.743 "compare": false, 00:10:17.743 "compare_and_write": false, 00:10:17.743 "abort": true, 00:10:17.743 "nvme_admin": false, 00:10:17.743 "nvme_io": false 00:10:17.743 }, 00:10:17.743 "memory_domains": [ 00:10:17.743 { 00:10:17.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.743 "dma_device_type": 2 00:10:17.743 } 00:10:17.743 ], 00:10:17.743 "driver_specific": {} 00:10:17.743 } 00:10:17.743 ]' 00:10:17.743 04:52:47 -- rpc/rpc.sh@17 -- # jq length 00:10:17.743 04:52:47 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:17.743 04:52:47 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:10:17.743 04:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.743 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:10:17.743 [2024-04-27 04:52:47.440352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:10:17.743 [2024-04-27 04:52:47.440510] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.743 [2024-04-27 04:52:47.440589] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:10:17.743 [2024-04-27 04:52:47.440624] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.743 [2024-04-27 04:52:47.443472] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.743 [2024-04-27 04:52:47.443577] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:17.743 Passthru0 00:10:17.743 04:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.743 04:52:47 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:17.743 04:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.743 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:10:17.744 04:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.744 04:52:47 -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:17.744 { 00:10:17.744 "name": "Malloc0", 00:10:17.744 "aliases": [ 00:10:17.744 "45d75750-6677-4d8d-b4c6-a8b0df1b4da5" 00:10:17.744 ], 00:10:17.744 "product_name": "Malloc disk", 00:10:17.744 "block_size": 512, 00:10:17.744 "num_blocks": 16384, 00:10:17.744 "uuid": "45d75750-6677-4d8d-b4c6-a8b0df1b4da5", 00:10:17.744 "assigned_rate_limits": { 00:10:17.744 "rw_ios_per_sec": 0, 00:10:17.744 "rw_mbytes_per_sec": 0, 00:10:17.744 "r_mbytes_per_sec": 0, 00:10:17.744 "w_mbytes_per_sec": 0 00:10:17.744 }, 00:10:17.744 "claimed": true, 00:10:17.744 "claim_type": "exclusive_write", 00:10:17.744 "zoned": false, 00:10:17.744 "supported_io_types": { 00:10:17.744 "read": true, 
00:10:17.744 "write": true, 00:10:17.744 "unmap": true, 00:10:17.744 "write_zeroes": true, 00:10:17.744 "flush": true, 00:10:17.744 "reset": true, 00:10:17.744 "compare": false, 00:10:17.744 "compare_and_write": false, 00:10:17.744 "abort": true, 00:10:17.744 "nvme_admin": false, 00:10:17.744 "nvme_io": false 00:10:17.744 }, 00:10:17.744 "memory_domains": [ 00:10:17.744 { 00:10:17.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.744 "dma_device_type": 2 00:10:17.744 } 00:10:17.744 ], 00:10:17.744 "driver_specific": {} 00:10:17.744 }, 00:10:17.744 { 00:10:17.744 "name": "Passthru0", 00:10:17.744 "aliases": [ 00:10:17.744 "771d61a4-264e-5281-805c-e6447c7ab91a" 00:10:17.744 ], 00:10:17.744 "product_name": "passthru", 00:10:17.744 "block_size": 512, 00:10:17.744 "num_blocks": 16384, 00:10:17.744 "uuid": "771d61a4-264e-5281-805c-e6447c7ab91a", 00:10:17.744 "assigned_rate_limits": { 00:10:17.744 "rw_ios_per_sec": 0, 00:10:17.744 "rw_mbytes_per_sec": 0, 00:10:17.744 "r_mbytes_per_sec": 0, 00:10:17.744 "w_mbytes_per_sec": 0 00:10:17.744 }, 00:10:17.744 "claimed": false, 00:10:17.744 "zoned": false, 00:10:17.744 "supported_io_types": { 00:10:17.744 "read": true, 00:10:17.744 "write": true, 00:10:17.744 "unmap": true, 00:10:17.744 "write_zeroes": true, 00:10:17.744 "flush": true, 00:10:17.744 "reset": true, 00:10:17.744 "compare": false, 00:10:17.744 "compare_and_write": false, 00:10:17.744 "abort": true, 00:10:17.744 "nvme_admin": false, 00:10:17.744 "nvme_io": false 00:10:17.744 }, 00:10:17.744 "memory_domains": [ 00:10:17.744 { 00:10:17.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.744 "dma_device_type": 2 00:10:17.744 } 00:10:17.744 ], 00:10:17.744 "driver_specific": { 00:10:17.744 "passthru": { 00:10:17.744 "name": "Passthru0", 00:10:17.744 "base_bdev_name": "Malloc0" 00:10:17.744 } 00:10:17.744 } 00:10:17.744 } 00:10:17.744 ]' 00:10:17.744 04:52:47 -- rpc/rpc.sh@21 -- # jq length 00:10:17.744 04:52:47 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:17.744 04:52:47 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:17.744 04:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.744 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:10:17.744 04:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.744 04:52:47 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:10:17.744 04:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.744 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:10:17.744 04:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.744 04:52:47 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:17.744 04:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.744 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:10:17.744 04:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.744 04:52:47 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:17.744 04:52:47 -- rpc/rpc.sh@26 -- # jq length 00:10:17.744 04:52:47 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:17.744 00:10:17.744 real 0m0.299s 00:10:17.744 user 0m0.207s 00:10:17.744 sys 0m0.022s 00:10:17.744 04:52:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:17.744 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:10:17.744 ************************************ 00:10:17.744 END TEST rpc_integrity 00:10:17.744 ************************************ 00:10:17.744 04:52:47 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:10:17.744 04:52:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:10:17.744 04:52:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:17.744 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:10:18.003 ************************************ 00:10:18.003 START TEST rpc_plugins 00:10:18.003 ************************************ 00:10:18.003 04:52:47 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:10:18.003 04:52:47 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:10:18.003 04:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.003 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:10:18.003 04:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.003 04:52:47 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:10:18.003 04:52:47 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:10:18.003 04:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.003 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:10:18.003 04:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.003 04:52:47 -- rpc/rpc.sh@31 -- # bdevs='[ 00:10:18.003 { 00:10:18.003 "name": "Malloc1", 00:10:18.003 "aliases": [ 00:10:18.003 "acd9b180-c756-471b-99c9-d1dc84617529" 00:10:18.003 ], 00:10:18.003 "product_name": "Malloc disk", 00:10:18.003 "block_size": 4096, 00:10:18.003 "num_blocks": 256, 00:10:18.003 "uuid": "acd9b180-c756-471b-99c9-d1dc84617529", 00:10:18.003 "assigned_rate_limits": { 00:10:18.003 "rw_ios_per_sec": 0, 00:10:18.003 "rw_mbytes_per_sec": 0, 00:10:18.003 "r_mbytes_per_sec": 0, 00:10:18.003 "w_mbytes_per_sec": 0 00:10:18.003 }, 00:10:18.003 "claimed": false, 00:10:18.003 "zoned": false, 00:10:18.003 "supported_io_types": { 00:10:18.003 "read": true, 00:10:18.003 "write": true, 00:10:18.003 "unmap": true, 00:10:18.003 "write_zeroes": true, 00:10:18.003 "flush": true, 00:10:18.003 "reset": true, 00:10:18.003 "compare": false, 00:10:18.003 "compare_and_write": false, 00:10:18.003 "abort": true, 00:10:18.003 "nvme_admin": false, 00:10:18.003 "nvme_io": false 00:10:18.003 }, 00:10:18.003 "memory_domains": [ 00:10:18.003 { 00:10:18.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.003 "dma_device_type": 2 00:10:18.003 } 00:10:18.003 ], 00:10:18.003 "driver_specific": {} 00:10:18.003 } 00:10:18.003 ]' 00:10:18.003 04:52:47 -- rpc/rpc.sh@32 -- # jq length 00:10:18.003 04:52:47 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:10:18.003 04:52:47 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:10:18.003 04:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.003 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:10:18.003 04:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.003 04:52:47 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:10:18.003 04:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.003 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:10:18.003 04:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.003 04:52:47 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:10:18.003 04:52:47 -- rpc/rpc.sh@36 -- # jq length 00:10:18.003 04:52:47 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:10:18.003 00:10:18.003 real 0m0.149s 00:10:18.003 user 0m0.109s 00:10:18.003 sys 0m0.005s 00:10:18.003 04:52:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.003 ************************************ 00:10:18.003 END TEST rpc_plugins 00:10:18.003 ************************************ 00:10:18.003 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:10:18.003 04:52:47 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:10:18.003 04:52:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:18.003 04:52:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:18.003 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:10:18.003 ************************************ 00:10:18.003 START TEST rpc_trace_cmd_test 00:10:18.003 ************************************ 00:10:18.004 04:52:47 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:10:18.004 04:52:47 -- rpc/rpc.sh@40 -- # local info 00:10:18.004 04:52:47 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:10:18.004 04:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.004 04:52:47 -- common/autotest_common.sh@10 -- # set +x 00:10:18.004 04:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.004 04:52:47 -- rpc/rpc.sh@42 -- # info='{ 00:10:18.004 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid116180", 00:10:18.004 "tpoint_group_mask": "0x8", 00:10:18.004 "iscsi_conn": { 00:10:18.004 "mask": "0x2", 00:10:18.004 "tpoint_mask": "0x0" 00:10:18.004 }, 00:10:18.004 "scsi": { 00:10:18.004 "mask": "0x4", 00:10:18.004 "tpoint_mask": "0x0" 00:10:18.004 }, 00:10:18.004 "bdev": { 00:10:18.004 "mask": "0x8", 00:10:18.004 "tpoint_mask": "0xffffffffffffffff" 00:10:18.004 }, 00:10:18.004 "nvmf_rdma": { 00:10:18.004 "mask": "0x10", 00:10:18.004 "tpoint_mask": "0x0" 00:10:18.004 }, 00:10:18.004 "nvmf_tcp": { 00:10:18.004 "mask": "0x20", 00:10:18.004 "tpoint_mask": "0x0" 00:10:18.004 }, 00:10:18.004 "ftl": { 00:10:18.004 "mask": "0x40", 00:10:18.004 "tpoint_mask": "0x0" 00:10:18.004 }, 00:10:18.004 "blobfs": { 00:10:18.004 "mask": "0x80", 00:10:18.004 "tpoint_mask": "0x0" 00:10:18.004 }, 00:10:18.004 "dsa": { 00:10:18.004 "mask": "0x200", 00:10:18.004 "tpoint_mask": "0x0" 00:10:18.004 }, 00:10:18.004 "thread": { 00:10:18.004 "mask": "0x400", 00:10:18.004 "tpoint_mask": "0x0" 00:10:18.004 }, 00:10:18.004 "nvme_pcie": { 00:10:18.004 "mask": "0x800", 00:10:18.004 "tpoint_mask": "0x0" 00:10:18.004 }, 00:10:18.004 "iaa": { 00:10:18.004 "mask": "0x1000", 00:10:18.004 "tpoint_mask": "0x0" 00:10:18.004 }, 00:10:18.004 "nvme_tcp": { 00:10:18.004 "mask": "0x2000", 00:10:18.004 "tpoint_mask": "0x0" 00:10:18.004 }, 00:10:18.004 "bdev_nvme": { 00:10:18.004 "mask": "0x4000", 00:10:18.004 "tpoint_mask": "0x0" 00:10:18.004 } 00:10:18.004 }' 00:10:18.004 04:52:47 -- rpc/rpc.sh@43 -- # jq length 00:10:18.263 04:52:47 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:10:18.263 04:52:47 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:10:18.263 04:52:47 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:10:18.263 04:52:47 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:10:18.263 04:52:48 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:10:18.263 04:52:48 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:10:18.263 04:52:48 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:10:18.263 04:52:48 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:10:18.263 04:52:48 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:10:18.263 00:10:18.263 real 0m0.264s 00:10:18.263 user 0m0.222s 00:10:18.263 sys 0m0.035s 00:10:18.263 04:52:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.263 ************************************ 00:10:18.263 04:52:48 -- common/autotest_common.sh@10 -- # set +x 00:10:18.263 END TEST rpc_trace_cmd_test 00:10:18.263 ************************************ 00:10:18.263 04:52:48 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:10:18.263 04:52:48 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:10:18.263 04:52:48 -- rpc/rpc.sh@81 -- # 
run_test rpc_daemon_integrity rpc_integrity 00:10:18.263 04:52:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:18.263 04:52:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:18.263 04:52:48 -- common/autotest_common.sh@10 -- # set +x 00:10:18.523 ************************************ 00:10:18.523 START TEST rpc_daemon_integrity 00:10:18.523 ************************************ 00:10:18.523 04:52:48 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:10:18.523 04:52:48 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:18.523 04:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.523 04:52:48 -- common/autotest_common.sh@10 -- # set +x 00:10:18.523 04:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.523 04:52:48 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:18.523 04:52:48 -- rpc/rpc.sh@13 -- # jq length 00:10:18.523 04:52:48 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:18.523 04:52:48 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:18.523 04:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.523 04:52:48 -- common/autotest_common.sh@10 -- # set +x 00:10:18.523 04:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.523 04:52:48 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:10:18.523 04:52:48 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:18.523 04:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.523 04:52:48 -- common/autotest_common.sh@10 -- # set +x 00:10:18.523 04:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.523 04:52:48 -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:18.523 { 00:10:18.523 "name": "Malloc2", 00:10:18.523 "aliases": [ 00:10:18.523 "3e7e5602-e154-4e6b-8395-260dcf688966" 00:10:18.523 ], 00:10:18.523 "product_name": "Malloc disk", 00:10:18.523 "block_size": 512, 00:10:18.523 "num_blocks": 16384, 00:10:18.523 "uuid": "3e7e5602-e154-4e6b-8395-260dcf688966", 00:10:18.523 "assigned_rate_limits": { 00:10:18.523 "rw_ios_per_sec": 0, 00:10:18.523 "rw_mbytes_per_sec": 0, 00:10:18.523 "r_mbytes_per_sec": 0, 00:10:18.523 "w_mbytes_per_sec": 0 00:10:18.523 }, 00:10:18.523 "claimed": false, 00:10:18.523 "zoned": false, 00:10:18.523 "supported_io_types": { 00:10:18.523 "read": true, 00:10:18.523 "write": true, 00:10:18.523 "unmap": true, 00:10:18.523 "write_zeroes": true, 00:10:18.523 "flush": true, 00:10:18.523 "reset": true, 00:10:18.523 "compare": false, 00:10:18.523 "compare_and_write": false, 00:10:18.523 "abort": true, 00:10:18.523 "nvme_admin": false, 00:10:18.523 "nvme_io": false 00:10:18.523 }, 00:10:18.523 "memory_domains": [ 00:10:18.523 { 00:10:18.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.523 "dma_device_type": 2 00:10:18.523 } 00:10:18.523 ], 00:10:18.523 "driver_specific": {} 00:10:18.523 } 00:10:18.523 ]' 00:10:18.523 04:52:48 -- rpc/rpc.sh@17 -- # jq length 00:10:18.523 04:52:48 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:18.523 04:52:48 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:10:18.523 04:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.523 04:52:48 -- common/autotest_common.sh@10 -- # set +x 00:10:18.523 [2024-04-27 04:52:48.316160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:10:18.523 [2024-04-27 04:52:48.316298] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.523 [2024-04-27 04:52:48.316347] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:10:18.523 
[2024-04-27 04:52:48.316418] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.523 [2024-04-27 04:52:48.319209] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.523 [2024-04-27 04:52:48.319305] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:18.523 Passthru0 00:10:18.523 04:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.523 04:52:48 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:18.523 04:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.523 04:52:48 -- common/autotest_common.sh@10 -- # set +x 00:10:18.523 04:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.523 04:52:48 -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:18.523 { 00:10:18.523 "name": "Malloc2", 00:10:18.523 "aliases": [ 00:10:18.523 "3e7e5602-e154-4e6b-8395-260dcf688966" 00:10:18.523 ], 00:10:18.523 "product_name": "Malloc disk", 00:10:18.523 "block_size": 512, 00:10:18.523 "num_blocks": 16384, 00:10:18.523 "uuid": "3e7e5602-e154-4e6b-8395-260dcf688966", 00:10:18.523 "assigned_rate_limits": { 00:10:18.523 "rw_ios_per_sec": 0, 00:10:18.523 "rw_mbytes_per_sec": 0, 00:10:18.523 "r_mbytes_per_sec": 0, 00:10:18.523 "w_mbytes_per_sec": 0 00:10:18.523 }, 00:10:18.523 "claimed": true, 00:10:18.523 "claim_type": "exclusive_write", 00:10:18.523 "zoned": false, 00:10:18.523 "supported_io_types": { 00:10:18.523 "read": true, 00:10:18.523 "write": true, 00:10:18.523 "unmap": true, 00:10:18.523 "write_zeroes": true, 00:10:18.523 "flush": true, 00:10:18.523 "reset": true, 00:10:18.523 "compare": false, 00:10:18.523 "compare_and_write": false, 00:10:18.523 "abort": true, 00:10:18.523 "nvme_admin": false, 00:10:18.523 "nvme_io": false 00:10:18.523 }, 00:10:18.523 "memory_domains": [ 00:10:18.523 { 00:10:18.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.523 "dma_device_type": 2 00:10:18.523 } 00:10:18.523 ], 00:10:18.523 "driver_specific": {} 00:10:18.523 }, 00:10:18.523 { 00:10:18.523 "name": "Passthru0", 00:10:18.523 "aliases": [ 00:10:18.523 "e4793216-1102-51dd-9760-90eafd419d71" 00:10:18.523 ], 00:10:18.523 "product_name": "passthru", 00:10:18.523 "block_size": 512, 00:10:18.523 "num_blocks": 16384, 00:10:18.523 "uuid": "e4793216-1102-51dd-9760-90eafd419d71", 00:10:18.523 "assigned_rate_limits": { 00:10:18.523 "rw_ios_per_sec": 0, 00:10:18.523 "rw_mbytes_per_sec": 0, 00:10:18.523 "r_mbytes_per_sec": 0, 00:10:18.523 "w_mbytes_per_sec": 0 00:10:18.523 }, 00:10:18.523 "claimed": false, 00:10:18.523 "zoned": false, 00:10:18.523 "supported_io_types": { 00:10:18.523 "read": true, 00:10:18.523 "write": true, 00:10:18.523 "unmap": true, 00:10:18.523 "write_zeroes": true, 00:10:18.523 "flush": true, 00:10:18.523 "reset": true, 00:10:18.523 "compare": false, 00:10:18.523 "compare_and_write": false, 00:10:18.523 "abort": true, 00:10:18.523 "nvme_admin": false, 00:10:18.523 "nvme_io": false 00:10:18.523 }, 00:10:18.523 "memory_domains": [ 00:10:18.523 { 00:10:18.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.523 "dma_device_type": 2 00:10:18.523 } 00:10:18.523 ], 00:10:18.523 "driver_specific": { 00:10:18.523 "passthru": { 00:10:18.523 "name": "Passthru0", 00:10:18.523 "base_bdev_name": "Malloc2" 00:10:18.523 } 00:10:18.523 } 00:10:18.523 } 00:10:18.523 ]' 00:10:18.523 04:52:48 -- rpc/rpc.sh@21 -- # jq length 00:10:18.523 04:52:48 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:18.523 04:52:48 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:18.523 04:52:48 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.523 04:52:48 -- common/autotest_common.sh@10 -- # set +x 00:10:18.523 04:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.523 04:52:48 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:18.523 04:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.523 04:52:48 -- common/autotest_common.sh@10 -- # set +x 00:10:18.523 04:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.523 04:52:48 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:18.523 04:52:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.523 04:52:48 -- common/autotest_common.sh@10 -- # set +x 00:10:18.523 04:52:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.523 04:52:48 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:18.523 04:52:48 -- rpc/rpc.sh@26 -- # jq length 00:10:18.783 04:52:48 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:18.783 00:10:18.783 real 0m0.296s 00:10:18.783 user 0m0.206s 00:10:18.783 sys 0m0.028s 00:10:18.783 04:52:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.783 ************************************ 00:10:18.783 END TEST rpc_daemon_integrity 00:10:18.783 04:52:48 -- common/autotest_common.sh@10 -- # set +x 00:10:18.783 ************************************ 00:10:18.783 04:52:48 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:18.783 04:52:48 -- rpc/rpc.sh@84 -- # killprocess 116180 00:10:18.783 04:52:48 -- common/autotest_common.sh@926 -- # '[' -z 116180 ']' 00:10:18.783 04:52:48 -- common/autotest_common.sh@930 -- # kill -0 116180 00:10:18.783 04:52:48 -- common/autotest_common.sh@931 -- # uname 00:10:18.783 04:52:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:18.783 04:52:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116180 00:10:18.783 04:52:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:18.783 killing process with pid 116180 00:10:18.783 04:52:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:18.783 04:52:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116180' 00:10:18.783 04:52:48 -- common/autotest_common.sh@945 -- # kill 116180 00:10:18.783 04:52:48 -- common/autotest_common.sh@950 -- # wait 116180 00:10:19.352 00:10:19.352 real 0m3.046s 00:10:19.352 user 0m3.677s 00:10:19.352 sys 0m0.874s 00:10:19.352 04:52:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.352 ************************************ 00:10:19.352 END TEST rpc 00:10:19.352 ************************************ 00:10:19.352 04:52:49 -- common/autotest_common.sh@10 -- # set +x 00:10:19.352 04:52:49 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:19.352 04:52:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:19.352 04:52:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:19.352 04:52:49 -- common/autotest_common.sh@10 -- # set +x 00:10:19.352 ************************************ 00:10:19.352 START TEST rpc_client 00:10:19.352 ************************************ 00:10:19.352 04:52:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:19.612 * Looking for test storage... 
00:10:19.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:10:19.612 04:52:49 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:10:19.612 OK 00:10:19.612 04:52:49 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:19.612 00:10:19.612 real 0m0.125s 00:10:19.612 user 0m0.090s 00:10:19.612 sys 0m0.047s 00:10:19.612 04:52:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.612 04:52:49 -- common/autotest_common.sh@10 -- # set +x 00:10:19.612 ************************************ 00:10:19.612 END TEST rpc_client 00:10:19.612 ************************************ 00:10:19.612 04:52:49 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:19.612 04:52:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:19.612 04:52:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:19.612 04:52:49 -- common/autotest_common.sh@10 -- # set +x 00:10:19.612 ************************************ 00:10:19.612 START TEST json_config 00:10:19.612 ************************************ 00:10:19.612 04:52:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:19.612 04:52:49 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:19.612 04:52:49 -- nvmf/common.sh@7 -- # uname -s 00:10:19.612 04:52:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.612 04:52:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.612 04:52:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.612 04:52:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.612 04:52:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.612 04:52:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.612 04:52:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.612 04:52:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.612 04:52:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.612 04:52:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.612 04:52:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc4cf2f-232e-4563-8d77-cf56e9bf5645 00:10:19.612 04:52:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc4cf2f-232e-4563-8d77-cf56e9bf5645 00:10:19.612 04:52:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.612 04:52:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.612 04:52:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:19.612 04:52:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:19.612 04:52:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.612 04:52:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.612 04:52:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.612 04:52:49 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:19.612 04:52:49 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:19.612 04:52:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:19.612 04:52:49 -- paths/export.sh@5 -- # export PATH 00:10:19.612 04:52:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:19.612 04:52:49 -- nvmf/common.sh@46 -- # : 0 00:10:19.612 04:52:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:19.612 04:52:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:19.612 04:52:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:19.612 04:52:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.612 04:52:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.612 04:52:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:19.612 04:52:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:19.612 04:52:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:19.612 04:52:49 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:10:19.612 04:52:49 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:10:19.612 04:52:49 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:10:19.612 04:52:49 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:19.612 04:52:49 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:10:19.612 04:52:49 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:10:19.613 04:52:49 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:10:19.613 04:52:49 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:10:19.613 04:52:49 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:10:19.613 04:52:49 -- json_config/json_config.sh@32 -- # declare -A app_params 00:10:19.613 04:52:49 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:10:19.613 04:52:49 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:10:19.613 04:52:49 -- json_config/json_config.sh@43 -- # last_event_id=0 00:10:19.613 04:52:49 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:19.613 INFO: JSON configuration test init 00:10:19.613 04:52:49 -- 
json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:10:19.613 04:52:49 -- json_config/json_config.sh@420 -- # json_config_test_init 00:10:19.613 04:52:49 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:10:19.613 04:52:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:19.613 04:52:49 -- common/autotest_common.sh@10 -- # set +x 00:10:19.613 04:52:49 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:10:19.613 04:52:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:19.613 04:52:49 -- common/autotest_common.sh@10 -- # set +x 00:10:19.613 04:52:49 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:10:19.613 04:52:49 -- json_config/json_config.sh@98 -- # local app=target 00:10:19.613 04:52:49 -- json_config/json_config.sh@99 -- # shift 00:10:19.613 04:52:49 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:10:19.613 04:52:49 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:10:19.613 04:52:49 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:10:19.613 04:52:49 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:10:19.613 04:52:49 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:10:19.613 04:52:49 -- json_config/json_config.sh@111 -- # app_pid[$app]=116448 00:10:19.613 Waiting for target to run... 00:10:19.613 04:52:49 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:10:19.613 04:52:49 -- json_config/json_config.sh@114 -- # waitforlisten 116448 /var/tmp/spdk_tgt.sock 00:10:19.613 04:52:49 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:10:19.613 04:52:49 -- common/autotest_common.sh@819 -- # '[' -z 116448 ']' 00:10:19.613 04:52:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:19.613 04:52:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:19.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:19.613 04:52:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:19.613 04:52:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:19.613 04:52:49 -- common/autotest_common.sh@10 -- # set +x 00:10:19.872 [2024-04-27 04:52:49.582547] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:10:19.873 [2024-04-27 04:52:49.582814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116448 ] 00:10:20.444 [2024-04-27 04:52:50.178401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.444 [2024-04-27 04:52:50.267935] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:20.444 [2024-04-27 04:52:50.268287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.702 04:52:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:20.702 00:10:20.702 04:52:50 -- common/autotest_common.sh@852 -- # return 0 00:10:20.702 04:52:50 -- json_config/json_config.sh@115 -- # echo '' 00:10:20.702 04:52:50 -- json_config/json_config.sh@322 -- # create_accel_config 00:10:20.702 04:52:50 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:10:20.702 04:52:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:20.702 04:52:50 -- common/autotest_common.sh@10 -- # set +x 00:10:20.702 04:52:50 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:10:20.702 04:52:50 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:10:20.702 04:52:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:20.702 04:52:50 -- common/autotest_common.sh@10 -- # set +x 00:10:20.702 04:52:50 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:10:20.702 04:52:50 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:10:20.702 04:52:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:10:21.269 04:52:50 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:10:21.269 04:52:50 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:10:21.269 04:52:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:21.269 04:52:50 -- common/autotest_common.sh@10 -- # set +x 00:10:21.269 04:52:50 -- json_config/json_config.sh@48 -- # local ret=0 00:10:21.269 04:52:50 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:10:21.269 04:52:50 -- json_config/json_config.sh@49 -- # local enabled_types 00:10:21.269 04:52:50 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:10:21.269 04:52:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:10:21.269 04:52:50 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:10:21.528 04:52:51 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:10:21.528 04:52:51 -- json_config/json_config.sh@51 -- # local get_types 00:10:21.528 04:52:51 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:10:21.528 04:52:51 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:10:21.528 04:52:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:21.528 04:52:51 -- common/autotest_common.sh@10 -- # set +x 00:10:21.528 04:52:51 -- json_config/json_config.sh@58 -- # return 0 00:10:21.528 04:52:51 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:10:21.528 04:52:51 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:10:21.528 04:52:51 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:10:21.528 04:52:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:21.528 04:52:51 -- common/autotest_common.sh@10 -- # set +x 00:10:21.528 04:52:51 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:10:21.528 04:52:51 -- json_config/json_config.sh@160 -- # local expected_notifications 00:10:21.528 04:52:51 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:10:21.528 04:52:51 -- json_config/json_config.sh@164 -- # get_notifications 00:10:21.528 04:52:51 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:10:21.528 04:52:51 -- json_config/json_config.sh@64 -- # IFS=: 00:10:21.528 04:52:51 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:21.528 04:52:51 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:10:21.528 04:52:51 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:10:21.528 04:52:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:10:21.787 04:52:51 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:10:21.787 04:52:51 -- json_config/json_config.sh@64 -- # IFS=: 00:10:21.787 04:52:51 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:21.787 04:52:51 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:10:21.787 04:52:51 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:10:21.787 04:52:51 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:10:21.787 04:52:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:10:22.045 Nvme0n1p0 Nvme0n1p1 00:10:22.045 04:52:51 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:10:22.045 04:52:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:10:22.304 [2024-04-27 04:52:52.026037] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:22.304 [2024-04-27 04:52:52.026201] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:22.304 00:10:22.304 04:52:52 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:10:22.304 04:52:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:10:22.563 Malloc3 00:10:22.563 04:52:52 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:10:22.563 04:52:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:10:22.822 [2024-04-27 04:52:52.530286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:22.822 [2024-04-27 04:52:52.530469] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.822 [2024-04-27 04:52:52.530517] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:22.822 [2024-04-27 04:52:52.530562] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:10:22.822 [2024-04-27 04:52:52.533924] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.822 [2024-04-27 04:52:52.534032] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:22.822 PTBdevFromMalloc3 00:10:22.822 04:52:52 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:10:22.822 04:52:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:10:23.081 Null0 00:10:23.081 04:52:52 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:10:23.081 04:52:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:10:23.340 Malloc0 00:10:23.340 04:52:53 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:10:23.340 04:52:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:10:23.340 Malloc1 00:10:23.598 04:52:53 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:10:23.598 04:52:53 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:10:23.857 102400+0 records in 00:10:23.857 102400+0 records out 00:10:23.857 104857600 bytes (105 MB, 100 MiB) copied, 0.367537 s, 285 MB/s 00:10:23.857 04:52:53 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:10:23.857 04:52:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:10:24.115 aio_disk 00:10:24.115 04:52:53 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:10:24.115 04:52:53 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:24.115 04:52:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:24.373 766ebd3e-e07e-4160-a929-0d1d9a844cad 00:10:24.373 04:52:54 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:10:24.373 04:52:54 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:10:24.373 04:52:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:10:24.631 04:52:54 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:10:24.631 04:52:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:10:24.889 04:52:54 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:10:24.889 04:52:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:10:25.147 04:52:54 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:25.147 04:52:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:25.406 04:52:55 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:10:25.406 04:52:55 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:10:25.406 04:52:55 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:a17a7228-5e99-4b27-8cc7-67573b3a0727 bdev_register:f3482d54-b47b-42db-87c8-d90bfb4dccf1 bdev_register:62d5e39d-2128-4a12-aaaf-84903a78c107 bdev_register:631839a9-c1de-487e-8b74-674b7f43e3a1 00:10:25.406 04:52:55 -- json_config/json_config.sh@70 -- # local events_to_check 00:10:25.406 04:52:55 -- json_config/json_config.sh@71 -- # local recorded_events 00:10:25.406 04:52:55 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:10:25.406 04:52:55 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:a17a7228-5e99-4b27-8cc7-67573b3a0727 bdev_register:f3482d54-b47b-42db-87c8-d90bfb4dccf1 bdev_register:62d5e39d-2128-4a12-aaaf-84903a78c107 bdev_register:631839a9-c1de-487e-8b74-674b7f43e3a1 00:10:25.406 04:52:55 -- json_config/json_config.sh@74 -- # sort 00:10:25.406 04:52:55 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:10:25.406 04:52:55 -- json_config/json_config.sh@75 -- # get_notifications 00:10:25.406 04:52:55 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:10:25.406 04:52:55 -- json_config/json_config.sh@75 -- # sort 00:10:25.406 04:52:55 -- json_config/json_config.sh@64 -- # IFS=: 00:10:25.406 04:52:55 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:25.406 04:52:55 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:10:25.406 04:52:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:10:25.406 04:52:55 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:10:25.664 04:52:55 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # IFS=: 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:25.664 04:52:55 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # IFS=: 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:25.664 04:52:55 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # IFS=: 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:25.664 04:52:55 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # IFS=: 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:25.664 04:52:55 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # IFS=: 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:25.664 04:52:55 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # IFS=: 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:25.664 04:52:55 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # IFS=: 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:25.664 04:52:55 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # IFS=: 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:25.664 04:52:55 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # IFS=: 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:25.664 04:52:55 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # IFS=: 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:25.664 04:52:55 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # IFS=: 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:25.664 04:52:55 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # IFS=: 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:25.664 04:52:55 -- json_config/json_config.sh@65 -- # echo bdev_register:a17a7228-5e99-4b27-8cc7-67573b3a0727 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # IFS=: 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:25.664 04:52:55 -- json_config/json_config.sh@65 -- # echo bdev_register:f3482d54-b47b-42db-87c8-d90bfb4dccf1 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # IFS=: 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:25.664 04:52:55 -- json_config/json_config.sh@65 -- # echo bdev_register:62d5e39d-2128-4a12-aaaf-84903a78c107 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # IFS=: 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:25.664 04:52:55 -- json_config/json_config.sh@65 -- # echo bdev_register:631839a9-c1de-487e-8b74-674b7f43e3a1 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # IFS=: 00:10:25.664 04:52:55 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:10:25.664 04:52:55 -- json_config/json_config.sh@77 
-- # [[ bdev_register:62d5e39d-2128-4a12-aaaf-84903a78c107 bdev_register:631839a9-c1de-487e-8b74-674b7f43e3a1 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a17a7228-5e99-4b27-8cc7-67573b3a0727 bdev_register:aio_disk bdev_register:f3482d54-b47b-42db-87c8-d90bfb4dccf1 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\2\d\5\e\3\9\d\-\2\1\2\8\-\4\a\1\2\-\a\a\a\f\-\8\4\9\0\3\a\7\8\c\1\0\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\3\1\8\3\9\a\9\-\c\1\d\e\-\4\8\7\e\-\8\b\7\4\-\6\7\4\b\7\f\4\3\e\3\a\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\1\7\a\7\2\2\8\-\5\e\9\9\-\4\b\2\7\-\8\c\c\7\-\6\7\5\7\3\b\3\a\0\7\2\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\3\4\8\2\d\5\4\-\b\4\7\b\-\4\2\d\b\-\8\7\c\8\-\d\9\0\b\f\b\4\d\c\c\f\1 ]] 00:10:25.664 04:52:55 -- json_config/json_config.sh@89 -- # cat 00:10:25.664 04:52:55 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:62d5e39d-2128-4a12-aaaf-84903a78c107 bdev_register:631839a9-c1de-487e-8b74-674b7f43e3a1 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a17a7228-5e99-4b27-8cc7-67573b3a0727 bdev_register:aio_disk bdev_register:f3482d54-b47b-42db-87c8-d90bfb4dccf1 00:10:25.664 Expected events matched: 00:10:25.664 bdev_register:62d5e39d-2128-4a12-aaaf-84903a78c107 00:10:25.664 bdev_register:631839a9-c1de-487e-8b74-674b7f43e3a1 00:10:25.664 bdev_register:Malloc0 00:10:25.664 bdev_register:Malloc0p0 00:10:25.664 bdev_register:Malloc0p1 00:10:25.664 bdev_register:Malloc0p2 00:10:25.664 bdev_register:Malloc1 00:10:25.664 bdev_register:Malloc3 00:10:25.664 bdev_register:Null0 00:10:25.664 bdev_register:Nvme0n1 00:10:25.664 bdev_register:Nvme0n1p0 00:10:25.664 bdev_register:Nvme0n1p1 00:10:25.664 bdev_register:PTBdevFromMalloc3 00:10:25.664 bdev_register:a17a7228-5e99-4b27-8cc7-67573b3a0727 00:10:25.664 bdev_register:aio_disk 00:10:25.664 bdev_register:f3482d54-b47b-42db-87c8-d90bfb4dccf1 00:10:25.664 04:52:55 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:10:25.664 04:52:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:25.664 04:52:55 -- common/autotest_common.sh@10 -- # set +x 00:10:25.664 04:52:55 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:10:25.664 04:52:55 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:10:25.664 04:52:55 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:10:25.664 04:52:55 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:10:25.664 04:52:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:25.664 04:52:55 -- common/autotest_common.sh@10 -- # set +x 00:10:25.664 
04:52:55 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:10:25.664 04:52:55 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:25.664 04:52:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:25.922 MallocBdevForConfigChangeCheck 00:10:25.922 04:52:55 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:10:25.922 04:52:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:25.922 04:52:55 -- common/autotest_common.sh@10 -- # set +x 00:10:25.922 04:52:55 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:10:25.922 04:52:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:26.488 INFO: shutting down applications... 00:10:26.488 04:52:56 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:10:26.488 04:52:56 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:10:26.488 04:52:56 -- json_config/json_config.sh@431 -- # json_config_clear target 00:10:26.488 04:52:56 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:10:26.488 04:52:56 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:10:26.488 [2024-04-27 04:52:56.270899] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:10:26.746 Calling clear_vhost_scsi_subsystem 00:10:26.746 Calling clear_iscsi_subsystem 00:10:26.746 Calling clear_vhost_blk_subsystem 00:10:26.746 Calling clear_nbd_subsystem 00:10:26.746 Calling clear_nvmf_subsystem 00:10:26.746 Calling clear_bdev_subsystem 00:10:26.746 Calling clear_accel_subsystem 00:10:26.746 Calling clear_iobuf_subsystem 00:10:26.746 Calling clear_sock_subsystem 00:10:26.746 Calling clear_vmd_subsystem 00:10:26.746 Calling clear_scheduler_subsystem 00:10:26.746 04:52:56 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:10:26.746 04:52:56 -- json_config/json_config.sh@396 -- # count=100 00:10:26.746 04:52:56 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:10:26.746 04:52:56 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:26.746 04:52:56 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:10:26.746 04:52:56 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:10:27.004 04:52:56 -- json_config/json_config.sh@398 -- # break 00:10:27.004 04:52:56 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:10:27.004 04:52:56 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:10:27.004 04:52:56 -- json_config/json_config.sh@120 -- # local app=target 00:10:27.004 04:52:56 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:10:27.004 04:52:56 -- json_config/json_config.sh@124 -- # [[ -n 116448 ]] 00:10:27.004 04:52:56 -- json_config/json_config.sh@127 -- # kill -SIGINT 116448 00:10:27.004 04:52:56 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:10:27.004 04:52:56 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:10:27.004 04:52:56 -- 
json_config/json_config.sh@130 -- # kill -0 116448 00:10:27.004 04:52:56 -- json_config/json_config.sh@134 -- # sleep 0.5 00:10:27.570 04:52:57 -- json_config/json_config.sh@129 -- # (( i++ )) 00:10:27.570 04:52:57 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:10:27.570 04:52:57 -- json_config/json_config.sh@130 -- # kill -0 116448 00:10:27.570 04:52:57 -- json_config/json_config.sh@134 -- # sleep 0.5 00:10:28.157 04:52:57 -- json_config/json_config.sh@129 -- # (( i++ )) 00:10:28.157 04:52:57 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:10:28.157 04:52:57 -- json_config/json_config.sh@130 -- # kill -0 116448 00:10:28.157 04:52:57 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:10:28.157 04:52:57 -- json_config/json_config.sh@132 -- # break 00:10:28.157 SPDK target shutdown done 00:10:28.157 04:52:57 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:10:28.157 04:52:57 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:10:28.157 INFO: relaunching applications... 00:10:28.157 04:52:57 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:10:28.157 04:52:57 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:28.157 04:52:57 -- json_config/json_config.sh@98 -- # local app=target 00:10:28.157 04:52:57 -- json_config/json_config.sh@99 -- # shift 00:10:28.157 04:52:57 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:10:28.157 04:52:57 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:10:28.157 04:52:57 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:10:28.157 04:52:57 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:10:28.157 04:52:57 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:10:28.157 04:52:57 -- json_config/json_config.sh@111 -- # app_pid[$app]=116707 00:10:28.157 Waiting for target to run... 00:10:28.157 04:52:57 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:10:28.157 04:52:57 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:28.157 04:52:57 -- json_config/json_config.sh@114 -- # waitforlisten 116707 /var/tmp/spdk_tgt.sock 00:10:28.157 04:52:57 -- common/autotest_common.sh@819 -- # '[' -z 116707 ']' 00:10:28.157 04:52:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:28.157 04:52:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:28.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:28.157 04:52:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:28.157 04:52:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:28.157 04:52:57 -- common/autotest_common.sh@10 -- # set +x 00:10:28.157 [2024-04-27 04:52:57.912962] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
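For reference, the save-and-relaunch step exercised here boils down to two commands; the socket path, core mask and config file name are the ones used in this run (paths relative to the SPDK checkout), while the $tgt_pid bookkeeping is illustrative:

    # Dump the running target's subsystem configuration to a JSON file.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json

    # Stop the target, then start a fresh instance that restores that
    # configuration at boot via --json.
    kill -SIGINT "$tgt_pid"
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json &
    tgt_pid=$!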
00:10:28.157 [2024-04-27 04:52:57.913245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116707 ] 00:10:29.093 [2024-04-27 04:52:58.648588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.093 [2024-04-27 04:52:58.749402] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:29.093 [2024-04-27 04:52:58.749757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.093 [2024-04-27 04:52:58.918051] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:29.093 [2024-04-27 04:52:58.918200] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:29.093 [2024-04-27 04:52:58.926027] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:29.093 [2024-04-27 04:52:58.926131] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:29.093 [2024-04-27 04:52:58.934083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:29.093 [2024-04-27 04:52:58.934207] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:29.093 [2024-04-27 04:52:58.934265] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:29.352 [2024-04-27 04:52:59.021853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:29.352 [2024-04-27 04:52:59.021987] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.352 [2024-04-27 04:52:59.022040] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:29.352 [2024-04-27 04:52:59.022086] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.352 [2024-04-27 04:52:59.022755] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.352 [2024-04-27 04:52:59.022867] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:29.920 04:52:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:29.920 04:52:59 -- common/autotest_common.sh@852 -- # return 0 00:10:29.920 00:10:29.920 04:52:59 -- json_config/json_config.sh@115 -- # echo '' 00:10:29.920 INFO: Checking if target configuration is the same... 00:10:29.920 04:52:59 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:10:29.920 04:52:59 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:29.920 04:52:59 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:29.920 04:52:59 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:10:29.920 04:52:59 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:29.920 + '[' 2 -ne 2 ']' 00:10:29.920 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:29.920 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
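The "is the configuration the same" check starting above hands json_diff.sh the live config through a process substitution (hence the /dev/fd/62 seen in the trace); a minimal equivalent, assuming the same file layout:

    # Compare what the relaunched target reports now against the JSON it
    # was started from; a non-zero exit means the restore was lossy.
    ./test/json_config/json_diff.sh \
        <(./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config) \
        spdk_tgt_config.json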
00:10:29.920 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:29.920 +++ basename /dev/fd/62 00:10:29.920 ++ mktemp /tmp/62.XXX 00:10:29.920 + tmp_file_1=/tmp/62.VmX 00:10:29.920 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:29.920 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:29.920 + tmp_file_2=/tmp/spdk_tgt_config.json.2Z2 00:10:29.920 + ret=0 00:10:29.920 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:30.178 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:30.178 + diff -u /tmp/62.VmX /tmp/spdk_tgt_config.json.2Z2 00:10:30.178 + echo 'INFO: JSON config files are the same' 00:10:30.178 INFO: JSON config files are the same 00:10:30.178 + rm /tmp/62.VmX /tmp/spdk_tgt_config.json.2Z2 00:10:30.178 + exit 0 00:10:30.178 INFO: changing configuration and checking if this can be detected... 00:10:30.178 04:53:00 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:10:30.178 04:53:00 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:30.178 04:53:00 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:30.178 04:53:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:30.435 04:53:00 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:30.435 04:53:00 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:10:30.435 04:53:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:30.435 + '[' 2 -ne 2 ']' 00:10:30.435 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:30.435 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:10:30.435 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:30.435 +++ basename /dev/fd/62 00:10:30.435 ++ mktemp /tmp/62.XXX 00:10:30.435 + tmp_file_1=/tmp/62.cWL 00:10:30.435 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:30.435 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:30.435 + tmp_file_2=/tmp/spdk_tgt_config.json.ggF 00:10:30.435 + ret=0 00:10:30.436 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:31.001 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:31.001 + diff -u /tmp/62.cWL /tmp/spdk_tgt_config.json.ggF 00:10:31.001 + ret=1 00:10:31.001 + echo '=== Start of file: /tmp/62.cWL ===' 00:10:31.001 + cat /tmp/62.cWL 00:10:31.001 + echo '=== End of file: /tmp/62.cWL ===' 00:10:31.001 + echo '' 00:10:31.001 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ggF ===' 00:10:31.001 + cat /tmp/spdk_tgt_config.json.ggF 00:10:31.001 + echo '=== End of file: /tmp/spdk_tgt_config.json.ggF ===' 00:10:31.001 + echo '' 00:10:31.001 + rm /tmp/62.cWL /tmp/spdk_tgt_config.json.ggF 00:10:31.001 + exit 1 00:10:31.001 INFO: configuration change detected. 00:10:31.001 04:53:00 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
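The change-detection pass above then deliberately perturbed the config and expected the same comparison to fail; sketched by hand (the scratch bdev name is the one the test created earlier for exactly this purpose):

    # Remove the scratch bdev, so the live config no longer matches the file.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        bdev_malloc_delete MallocBdevForConfigChangeCheck

    # Re-run the comparison: a diff (exit status 1) is now the expected
    # outcome, matching the 'configuration change detected.' message.
    ./test/json_config/json_diff.sh \
        <(./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config) \
        spdk_tgt_config.json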
00:10:31.001 04:53:00 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:10:31.001 04:53:00 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:10:31.002 04:53:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:31.002 04:53:00 -- common/autotest_common.sh@10 -- # set +x 00:10:31.002 04:53:00 -- json_config/json_config.sh@360 -- # local ret=0 00:10:31.002 04:53:00 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:10:31.002 04:53:00 -- json_config/json_config.sh@370 -- # [[ -n 116707 ]] 00:10:31.002 04:53:00 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:10:31.002 04:53:00 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:10:31.002 04:53:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:31.002 04:53:00 -- common/autotest_common.sh@10 -- # set +x 00:10:31.002 04:53:00 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:10:31.002 04:53:00 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:10:31.002 04:53:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:10:31.261 04:53:01 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:10:31.261 04:53:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:10:31.519 04:53:01 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:10:31.519 04:53:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:10:31.777 04:53:01 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:10:31.777 04:53:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:10:32.036 04:53:01 -- json_config/json_config.sh@246 -- # uname -s 00:10:32.036 04:53:01 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:10:32.036 04:53:01 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:10:32.036 04:53:01 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:10:32.036 04:53:01 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:10:32.036 04:53:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:32.036 04:53:01 -- common/autotest_common.sh@10 -- # set +x 00:10:32.036 04:53:01 -- json_config/json_config.sh@376 -- # killprocess 116707 00:10:32.036 04:53:01 -- common/autotest_common.sh@926 -- # '[' -z 116707 ']' 00:10:32.036 04:53:01 -- common/autotest_common.sh@930 -- # kill -0 116707 00:10:32.036 04:53:01 -- common/autotest_common.sh@931 -- # uname 00:10:32.036 04:53:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:32.036 04:53:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116707 00:10:32.036 04:53:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:32.036 04:53:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:32.036 04:53:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116707' 00:10:32.036 killing process with pid 116707 00:10:32.036 04:53:01 -- common/autotest_common.sh@945 -- # kill 116707 00:10:32.036 04:53:01 -- common/autotest_common.sh@950 -- # wait 116707 00:10:32.973 04:53:02 -- 
json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:32.973 04:53:02 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:10:32.973 04:53:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:32.973 04:53:02 -- common/autotest_common.sh@10 -- # set +x 00:10:32.973 INFO: Success 00:10:32.973 04:53:02 -- json_config/json_config.sh@381 -- # return 0 00:10:32.973 04:53:02 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:10:32.973 00:10:32.973 real 0m13.156s 00:10:32.973 user 0m18.814s 00:10:32.973 sys 0m2.984s 00:10:32.973 ************************************ 00:10:32.974 END TEST json_config 00:10:32.974 ************************************ 00:10:32.974 04:53:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.974 04:53:02 -- common/autotest_common.sh@10 -- # set +x 00:10:32.974 04:53:02 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:32.974 04:53:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:32.974 04:53:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:32.974 04:53:02 -- common/autotest_common.sh@10 -- # set +x 00:10:32.974 ************************************ 00:10:32.974 START TEST json_config_extra_key 00:10:32.974 ************************************ 00:10:32.974 04:53:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:32.974 04:53:02 -- nvmf/common.sh@7 -- # uname -s 00:10:32.974 04:53:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.974 04:53:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.974 04:53:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.974 04:53:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.974 04:53:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.974 04:53:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.974 04:53:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.974 04:53:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.974 04:53:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.974 04:53:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.974 04:53:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0eeaf921-72f1-4d86-9986-f578a8b3a010 00:10:32.974 04:53:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=0eeaf921-72f1-4d86-9986-f578a8b3a010 00:10:32.974 04:53:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.974 04:53:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.974 04:53:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:32.974 04:53:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:32.974 04:53:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.974 04:53:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.974 04:53:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.974 04:53:02 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:32.974 04:53:02 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:32.974 04:53:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:32.974 04:53:02 -- paths/export.sh@5 -- # export PATH 00:10:32.974 04:53:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:32.974 04:53:02 -- nvmf/common.sh@46 -- # : 0 00:10:32.974 04:53:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:32.974 04:53:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:32.974 04:53:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:32.974 04:53:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.974 04:53:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.974 04:53:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:32.974 04:53:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:32.974 04:53:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:10:32.974 INFO: launching applications... 
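json_config_extra_key.sh tracks everything per application in bash associative arrays, as the declare -A output above shows; reduced to its essentials (the $rootdir variable and the literal values mirror this run, the launch lines are a sketch):

    # One key per managed app ("target" here); the same key indexes its
    # PID, RPC socket, extra CLI parameters and JSON config.
    declare -A app_pid=([target]="")
    declare -A app_socket=([target]="/var/tmp/spdk_tgt.sock")
    declare -A app_params=([target]="-m 0x1 -s 1024")
    declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")

    app=target
    "$rootdir/build/bin/spdk_tgt" ${app_params[$app]} -r "${app_socket[$app]}" \
        --json "${configs_path[$app]}" &
    app_pid[$app]=$!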
00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@25 -- # shift 00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=116886 00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:32.974 Waiting for target to run... 00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:10:32.974 04:53:02 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 116886 /var/tmp/spdk_tgt.sock 00:10:32.974 04:53:02 -- common/autotest_common.sh@819 -- # '[' -z 116886 ']' 00:10:32.974 04:53:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:32.974 04:53:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:32.974 04:53:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:32.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:32.974 04:53:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:32.974 04:53:02 -- common/autotest_common.sh@10 -- # set +x 00:10:32.974 [2024-04-27 04:53:02.805211] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:32.974 [2024-04-27 04:53:02.805929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116886 ] 00:10:33.952 [2024-04-27 04:53:03.599113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.952 [2024-04-27 04:53:03.697453] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:33.952 [2024-04-27 04:53:03.697960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.899 00:10:34.899 INFO: shutting down applications... 00:10:34.899 04:53:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:34.899 04:53:04 -- common/autotest_common.sh@852 -- # return 0 00:10:34.899 04:53:04 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:10:34.899 04:53:04 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
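After launching, the harness blocks in waitforlisten until the new process answers on its RPC socket; conceptually the wait looks like the loop below (an illustrative sketch with an arbitrary retry bound, not the actual helper from autotest_common.sh):

    # Poll the RPC socket until the freshly started target responds.
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 spdk_get_version \
            &>/dev/null && break
        sleep 0.5
    done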
00:10:34.899 04:53:04 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:10:34.899 04:53:04 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:10:34.899 04:53:04 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:10:34.899 04:53:04 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 116886 ]] 00:10:34.899 04:53:04 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 116886 00:10:34.899 04:53:04 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:10:34.899 04:53:04 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:10:34.899 04:53:04 -- json_config/json_config_extra_key.sh@50 -- # kill -0 116886 00:10:34.899 04:53:04 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:10:35.156 04:53:05 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:10:35.156 04:53:05 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:10:35.156 04:53:05 -- json_config/json_config_extra_key.sh@50 -- # kill -0 116886 00:10:35.156 04:53:05 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:10:35.724 04:53:05 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:10:35.724 04:53:05 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:10:35.724 04:53:05 -- json_config/json_config_extra_key.sh@50 -- # kill -0 116886 00:10:35.724 04:53:05 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:10:36.290 SPDK target shutdown done 00:10:36.290 Success 00:10:36.290 04:53:06 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:10:36.290 04:53:06 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:10:36.290 04:53:06 -- json_config/json_config_extra_key.sh@50 -- # kill -0 116886 00:10:36.290 04:53:06 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:10:36.290 04:53:06 -- json_config/json_config_extra_key.sh@52 -- # break 00:10:36.290 04:53:06 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:10:36.290 04:53:06 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:10:36.290 04:53:06 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:10:36.290 00:10:36.290 real 0m3.412s 00:10:36.290 user 0m2.768s 00:10:36.290 sys 0m0.898s 00:10:36.290 04:53:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.290 ************************************ 00:10:36.290 END TEST json_config_extra_key 00:10:36.290 ************************************ 00:10:36.290 04:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:36.290 04:53:06 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:36.290 04:53:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:36.290 04:53:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:36.290 04:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:36.290 ************************************ 00:10:36.290 START TEST alias_rpc 00:10:36.290 ************************************ 00:10:36.290 04:53:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:36.290 * Looking for test storage... 
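The shutdown path above sends SIGINT and then polls the PID for at most 30 half-second intervals before declaring 'SPDK target shutdown done'; stripped to the pattern ($pid stands for the recorded app_pid entry):

    # Ask the target to exit cleanly, then wait for the PID to disappear.
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # gone -> clean shutdown
        sleep 0.5
    done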
00:10:36.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:36.290 04:53:06 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:36.290 04:53:06 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=116981 00:10:36.290 04:53:06 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 116981 00:10:36.290 04:53:06 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:36.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.290 04:53:06 -- common/autotest_common.sh@819 -- # '[' -z 116981 ']' 00:10:36.290 04:53:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.290 04:53:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:36.290 04:53:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.290 04:53:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:36.290 04:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:36.549 [2024-04-27 04:53:06.259835] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:36.550 [2024-04-27 04:53:06.260359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116981 ] 00:10:36.550 [2024-04-27 04:53:06.429882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.809 [2024-04-27 04:53:06.578024] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:36.809 [2024-04-27 04:53:06.578561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.375 04:53:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:37.375 04:53:07 -- common/autotest_common.sh@852 -- # return 0 00:10:37.375 04:53:07 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:37.941 04:53:07 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 116981 00:10:37.941 04:53:07 -- common/autotest_common.sh@926 -- # '[' -z 116981 ']' 00:10:37.941 04:53:07 -- common/autotest_common.sh@930 -- # kill -0 116981 00:10:37.941 04:53:07 -- common/autotest_common.sh@931 -- # uname 00:10:37.941 04:53:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:37.941 04:53:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116981 00:10:37.941 killing process with pid 116981 00:10:37.941 04:53:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:37.941 04:53:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:37.941 04:53:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116981' 00:10:37.941 04:53:07 -- common/autotest_common.sh@945 -- # kill 116981 00:10:37.941 04:53:07 -- common/autotest_common.sh@950 -- # wait 116981 00:10:38.878 ************************************ 00:10:38.878 END TEST alias_rpc 00:10:38.878 ************************************ 00:10:38.878 00:10:38.878 real 0m2.384s 00:10:38.878 user 0m2.411s 00:10:38.878 sys 0m0.702s 00:10:38.878 04:53:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.878 04:53:08 -- common/autotest_common.sh@10 -- # set +x 00:10:38.878 04:53:08 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:10:38.878 04:53:08 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp 
/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:38.878 04:53:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:38.878 04:53:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:38.878 04:53:08 -- common/autotest_common.sh@10 -- # set +x 00:10:38.878 ************************************ 00:10:38.878 START TEST spdkcli_tcp 00:10:38.878 ************************************ 00:10:38.878 04:53:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:38.878 * Looking for test storage... 00:10:38.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:38.878 04:53:08 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:38.878 04:53:08 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:38.878 04:53:08 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:38.878 04:53:08 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:38.878 04:53:08 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:38.878 04:53:08 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:38.878 04:53:08 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:38.878 04:53:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:38.878 04:53:08 -- common/autotest_common.sh@10 -- # set +x 00:10:38.878 04:53:08 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=117080 00:10:38.878 04:53:08 -- spdkcli/tcp.sh@27 -- # waitforlisten 117080 00:10:38.878 04:53:08 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:38.878 04:53:08 -- common/autotest_common.sh@819 -- # '[' -z 117080 ']' 00:10:38.878 04:53:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.878 04:53:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:38.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.878 04:53:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.878 04:53:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:38.878 04:53:08 -- common/autotest_common.sh@10 -- # set +x 00:10:38.878 [2024-04-27 04:53:08.698152] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
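spdkcli_tcp drives the same JSON-RPC server over TCP instead of the UNIX socket; the bridge it sets up just below can be sketched as follows (IP, port and retry flags are the test's own constants):

    # Expose the target's UNIX-domain RPC socket on 127.0.0.1:9998 ...
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # ... then talk to it through rpc.py's TCP transport, retrying while
    # the bridge comes up (-r retries, -t per-call timeout in seconds).
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods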
00:10:38.878 [2024-04-27 04:53:08.698460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117080 ] 00:10:39.137 [2024-04-27 04:53:08.876177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:39.137 [2024-04-27 04:53:09.012568] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:39.137 [2024-04-27 04:53:09.013272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.137 [2024-04-27 04:53:09.013279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.073 04:53:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:40.073 04:53:09 -- common/autotest_common.sh@852 -- # return 0 00:10:40.073 04:53:09 -- spdkcli/tcp.sh@31 -- # socat_pid=117100 00:10:40.073 04:53:09 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:40.073 04:53:09 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:40.073 [ 00:10:40.073 "spdk_get_version", 00:10:40.073 "rpc_get_methods", 00:10:40.073 "trace_get_info", 00:10:40.073 "trace_get_tpoint_group_mask", 00:10:40.073 "trace_disable_tpoint_group", 00:10:40.073 "trace_enable_tpoint_group", 00:10:40.073 "trace_clear_tpoint_mask", 00:10:40.073 "trace_set_tpoint_mask", 00:10:40.073 "framework_get_pci_devices", 00:10:40.073 "framework_get_config", 00:10:40.073 "framework_get_subsystems", 00:10:40.073 "iobuf_get_stats", 00:10:40.073 "iobuf_set_options", 00:10:40.073 "sock_set_default_impl", 00:10:40.073 "sock_impl_set_options", 00:10:40.073 "sock_impl_get_options", 00:10:40.073 "vmd_rescan", 00:10:40.073 "vmd_remove_device", 00:10:40.073 "vmd_enable", 00:10:40.073 "accel_get_stats", 00:10:40.073 "accel_set_options", 00:10:40.073 "accel_set_driver", 00:10:40.073 "accel_crypto_key_destroy", 00:10:40.073 "accel_crypto_keys_get", 00:10:40.073 "accel_crypto_key_create", 00:10:40.073 "accel_assign_opc", 00:10:40.073 "accel_get_module_info", 00:10:40.073 "accel_get_opc_assignments", 00:10:40.073 "notify_get_notifications", 00:10:40.073 "notify_get_types", 00:10:40.073 "bdev_get_histogram", 00:10:40.073 "bdev_enable_histogram", 00:10:40.073 "bdev_set_qos_limit", 00:10:40.073 "bdev_set_qd_sampling_period", 00:10:40.073 "bdev_get_bdevs", 00:10:40.073 "bdev_reset_iostat", 00:10:40.073 "bdev_get_iostat", 00:10:40.073 "bdev_examine", 00:10:40.073 "bdev_wait_for_examine", 00:10:40.073 "bdev_set_options", 00:10:40.073 "scsi_get_devices", 00:10:40.073 "thread_set_cpumask", 00:10:40.073 "framework_get_scheduler", 00:10:40.073 "framework_set_scheduler", 00:10:40.073 "framework_get_reactors", 00:10:40.073 "thread_get_io_channels", 00:10:40.073 "thread_get_pollers", 00:10:40.073 "thread_get_stats", 00:10:40.073 "framework_monitor_context_switch", 00:10:40.073 "spdk_kill_instance", 00:10:40.073 "log_enable_timestamps", 00:10:40.073 "log_get_flags", 00:10:40.073 "log_clear_flag", 00:10:40.073 "log_set_flag", 00:10:40.073 "log_get_level", 00:10:40.073 "log_set_level", 00:10:40.073 "log_get_print_level", 00:10:40.073 "log_set_print_level", 00:10:40.073 "framework_enable_cpumask_locks", 00:10:40.073 "framework_disable_cpumask_locks", 00:10:40.073 "framework_wait_init", 00:10:40.073 "framework_start_init", 00:10:40.073 "virtio_blk_create_transport", 00:10:40.073 "virtio_blk_get_transports", 
00:10:40.073 "vhost_controller_set_coalescing", 00:10:40.073 "vhost_get_controllers", 00:10:40.073 "vhost_delete_controller", 00:10:40.073 "vhost_create_blk_controller", 00:10:40.073 "vhost_scsi_controller_remove_target", 00:10:40.073 "vhost_scsi_controller_add_target", 00:10:40.073 "vhost_start_scsi_controller", 00:10:40.073 "vhost_create_scsi_controller", 00:10:40.073 "nbd_get_disks", 00:10:40.073 "nbd_stop_disk", 00:10:40.073 "nbd_start_disk", 00:10:40.073 "env_dpdk_get_mem_stats", 00:10:40.073 "nvmf_subsystem_get_listeners", 00:10:40.073 "nvmf_subsystem_get_qpairs", 00:10:40.073 "nvmf_subsystem_get_controllers", 00:10:40.073 "nvmf_get_stats", 00:10:40.073 "nvmf_get_transports", 00:10:40.073 "nvmf_create_transport", 00:10:40.073 "nvmf_get_targets", 00:10:40.073 "nvmf_delete_target", 00:10:40.073 "nvmf_create_target", 00:10:40.073 "nvmf_subsystem_allow_any_host", 00:10:40.073 "nvmf_subsystem_remove_host", 00:10:40.073 "nvmf_subsystem_add_host", 00:10:40.073 "nvmf_subsystem_remove_ns", 00:10:40.073 "nvmf_subsystem_add_ns", 00:10:40.073 "nvmf_subsystem_listener_set_ana_state", 00:10:40.073 "nvmf_discovery_get_referrals", 00:10:40.073 "nvmf_discovery_remove_referral", 00:10:40.073 "nvmf_discovery_add_referral", 00:10:40.073 "nvmf_subsystem_remove_listener", 00:10:40.073 "nvmf_subsystem_add_listener", 00:10:40.073 "nvmf_delete_subsystem", 00:10:40.073 "nvmf_create_subsystem", 00:10:40.073 "nvmf_get_subsystems", 00:10:40.073 "nvmf_set_crdt", 00:10:40.073 "nvmf_set_config", 00:10:40.073 "nvmf_set_max_subsystems", 00:10:40.073 "iscsi_set_options", 00:10:40.073 "iscsi_get_auth_groups", 00:10:40.073 "iscsi_auth_group_remove_secret", 00:10:40.073 "iscsi_auth_group_add_secret", 00:10:40.073 "iscsi_delete_auth_group", 00:10:40.073 "iscsi_create_auth_group", 00:10:40.073 "iscsi_set_discovery_auth", 00:10:40.073 "iscsi_get_options", 00:10:40.073 "iscsi_target_node_request_logout", 00:10:40.073 "iscsi_target_node_set_redirect", 00:10:40.073 "iscsi_target_node_set_auth", 00:10:40.073 "iscsi_target_node_add_lun", 00:10:40.073 "iscsi_get_connections", 00:10:40.073 "iscsi_portal_group_set_auth", 00:10:40.073 "iscsi_start_portal_group", 00:10:40.073 "iscsi_delete_portal_group", 00:10:40.073 "iscsi_create_portal_group", 00:10:40.073 "iscsi_get_portal_groups", 00:10:40.073 "iscsi_delete_target_node", 00:10:40.073 "iscsi_target_node_remove_pg_ig_maps", 00:10:40.073 "iscsi_target_node_add_pg_ig_maps", 00:10:40.073 "iscsi_create_target_node", 00:10:40.073 "iscsi_get_target_nodes", 00:10:40.073 "iscsi_delete_initiator_group", 00:10:40.073 "iscsi_initiator_group_remove_initiators", 00:10:40.073 "iscsi_initiator_group_add_initiators", 00:10:40.073 "iscsi_create_initiator_group", 00:10:40.073 "iscsi_get_initiator_groups", 00:10:40.073 "iaa_scan_accel_module", 00:10:40.073 "dsa_scan_accel_module", 00:10:40.074 "ioat_scan_accel_module", 00:10:40.074 "accel_error_inject_error", 00:10:40.074 "bdev_iscsi_delete", 00:10:40.074 "bdev_iscsi_create", 00:10:40.074 "bdev_iscsi_set_options", 00:10:40.074 "bdev_virtio_attach_controller", 00:10:40.074 "bdev_virtio_scsi_get_devices", 00:10:40.074 "bdev_virtio_detach_controller", 00:10:40.074 "bdev_virtio_blk_set_hotplug", 00:10:40.074 "bdev_ftl_set_property", 00:10:40.074 "bdev_ftl_get_properties", 00:10:40.074 "bdev_ftl_get_stats", 00:10:40.074 "bdev_ftl_unmap", 00:10:40.074 "bdev_ftl_unload", 00:10:40.074 "bdev_ftl_delete", 00:10:40.074 "bdev_ftl_load", 00:10:40.074 "bdev_ftl_create", 00:10:40.074 "bdev_aio_delete", 00:10:40.074 "bdev_aio_rescan", 00:10:40.074 "bdev_aio_create", 
00:10:40.074 "blobfs_create", 00:10:40.074 "blobfs_detect", 00:10:40.074 "blobfs_set_cache_size", 00:10:40.074 "bdev_zone_block_delete", 00:10:40.074 "bdev_zone_block_create", 00:10:40.074 "bdev_delay_delete", 00:10:40.074 "bdev_delay_create", 00:10:40.074 "bdev_delay_update_latency", 00:10:40.074 "bdev_split_delete", 00:10:40.074 "bdev_split_create", 00:10:40.074 "bdev_error_inject_error", 00:10:40.074 "bdev_error_delete", 00:10:40.074 "bdev_error_create", 00:10:40.074 "bdev_raid_set_options", 00:10:40.074 "bdev_raid_remove_base_bdev", 00:10:40.074 "bdev_raid_add_base_bdev", 00:10:40.074 "bdev_raid_delete", 00:10:40.074 "bdev_raid_create", 00:10:40.074 "bdev_raid_get_bdevs", 00:10:40.074 "bdev_lvol_grow_lvstore", 00:10:40.074 "bdev_lvol_get_lvols", 00:10:40.074 "bdev_lvol_get_lvstores", 00:10:40.074 "bdev_lvol_delete", 00:10:40.074 "bdev_lvol_set_read_only", 00:10:40.074 "bdev_lvol_resize", 00:10:40.074 "bdev_lvol_decouple_parent", 00:10:40.074 "bdev_lvol_inflate", 00:10:40.074 "bdev_lvol_rename", 00:10:40.074 "bdev_lvol_clone_bdev", 00:10:40.074 "bdev_lvol_clone", 00:10:40.074 "bdev_lvol_snapshot", 00:10:40.074 "bdev_lvol_create", 00:10:40.074 "bdev_lvol_delete_lvstore", 00:10:40.074 "bdev_lvol_rename_lvstore", 00:10:40.074 "bdev_lvol_create_lvstore", 00:10:40.074 "bdev_passthru_delete", 00:10:40.074 "bdev_passthru_create", 00:10:40.074 "bdev_nvme_cuse_unregister", 00:10:40.074 "bdev_nvme_cuse_register", 00:10:40.074 "bdev_opal_new_user", 00:10:40.074 "bdev_opal_set_lock_state", 00:10:40.074 "bdev_opal_delete", 00:10:40.074 "bdev_opal_get_info", 00:10:40.074 "bdev_opal_create", 00:10:40.074 "bdev_nvme_opal_revert", 00:10:40.074 "bdev_nvme_opal_init", 00:10:40.074 "bdev_nvme_send_cmd", 00:10:40.074 "bdev_nvme_get_path_iostat", 00:10:40.074 "bdev_nvme_get_mdns_discovery_info", 00:10:40.074 "bdev_nvme_stop_mdns_discovery", 00:10:40.074 "bdev_nvme_start_mdns_discovery", 00:10:40.074 "bdev_nvme_set_multipath_policy", 00:10:40.074 "bdev_nvme_set_preferred_path", 00:10:40.074 "bdev_nvme_get_io_paths", 00:10:40.074 "bdev_nvme_remove_error_injection", 00:10:40.074 "bdev_nvme_add_error_injection", 00:10:40.074 "bdev_nvme_get_discovery_info", 00:10:40.074 "bdev_nvme_stop_discovery", 00:10:40.074 "bdev_nvme_start_discovery", 00:10:40.074 "bdev_nvme_get_controller_health_info", 00:10:40.074 "bdev_nvme_disable_controller", 00:10:40.074 "bdev_nvme_enable_controller", 00:10:40.074 "bdev_nvme_reset_controller", 00:10:40.074 "bdev_nvme_get_transport_statistics", 00:10:40.074 "bdev_nvme_apply_firmware", 00:10:40.074 "bdev_nvme_detach_controller", 00:10:40.074 "bdev_nvme_get_controllers", 00:10:40.074 "bdev_nvme_attach_controller", 00:10:40.074 "bdev_nvme_set_hotplug", 00:10:40.074 "bdev_nvme_set_options", 00:10:40.074 "bdev_null_resize", 00:10:40.074 "bdev_null_delete", 00:10:40.074 "bdev_null_create", 00:10:40.074 "bdev_malloc_delete", 00:10:40.074 "bdev_malloc_create" 00:10:40.074 ] 00:10:40.074 04:53:09 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:40.074 04:53:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:40.074 04:53:09 -- common/autotest_common.sh@10 -- # set +x 00:10:40.334 04:53:09 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:40.334 04:53:09 -- spdkcli/tcp.sh@38 -- # killprocess 117080 00:10:40.334 04:53:09 -- common/autotest_common.sh@926 -- # '[' -z 117080 ']' 00:10:40.334 04:53:09 -- common/autotest_common.sh@930 -- # kill -0 117080 00:10:40.334 04:53:09 -- common/autotest_common.sh@931 -- # uname 00:10:40.334 04:53:09 -- common/autotest_common.sh@931 
-- # '[' Linux = Linux ']' 00:10:40.334 04:53:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117080 00:10:40.334 04:53:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:40.334 04:53:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:40.334 04:53:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117080' 00:10:40.334 killing process with pid 117080 00:10:40.334 04:53:10 -- common/autotest_common.sh@945 -- # kill 117080 00:10:40.334 04:53:10 -- common/autotest_common.sh@950 -- # wait 117080 00:10:40.902 ************************************ 00:10:40.902 END TEST spdkcli_tcp 00:10:40.902 ************************************ 00:10:40.902 00:10:40.902 real 0m2.222s 00:10:40.902 user 0m3.886s 00:10:40.902 sys 0m0.690s 00:10:40.902 04:53:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.902 04:53:10 -- common/autotest_common.sh@10 -- # set +x 00:10:41.161 04:53:10 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:41.161 04:53:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:41.161 04:53:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:41.161 04:53:10 -- common/autotest_common.sh@10 -- # set +x 00:10:41.161 ************************************ 00:10:41.161 START TEST dpdk_mem_utility 00:10:41.161 ************************************ 00:10:41.161 04:53:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:41.161 * Looking for test storage... 00:10:41.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:41.161 04:53:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:41.162 04:53:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=117178 00:10:41.162 04:53:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 117178 00:10:41.162 04:53:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:41.162 04:53:10 -- common/autotest_common.sh@819 -- # '[' -z 117178 ']' 00:10:41.162 04:53:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.162 04:53:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:41.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.162 04:53:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.162 04:53:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:41.162 04:53:10 -- common/autotest_common.sh@10 -- # set +x 00:10:41.162 [2024-04-27 04:53:10.972293] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
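The mem-info pass requested below boils down to a few commands against the running spdk_tgt. A minimal sketch of the same flow outside the harness (paths relative to the SPDK repo root; rpc.py talks to the default /var/tmp/spdk.sock socket that waitforlisten polls for):

  ./build/bin/spdk_tgt &                      # start the target and let it open the RPC socket
  ./scripts/rpc.py env_dpdk_get_mem_stats     # asks DPDK to dump stats to /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                  # summarize heaps, mempools and memzones from that dump
  ./scripts/dpdk_mem_info.py -m 0             # list the individual elements of heap 0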
00:10:41.162 [2024-04-27 04:53:10.972633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117178 ] 00:10:41.421 [2024-04-27 04:53:11.145423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.421 [2024-04-27 04:53:11.256675] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:41.421 [2024-04-27 04:53:11.256939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.359 04:53:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:42.359 04:53:11 -- common/autotest_common.sh@852 -- # return 0 00:10:42.359 04:53:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:42.359 04:53:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:42.359 04:53:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:42.359 04:53:11 -- common/autotest_common.sh@10 -- # set +x 00:10:42.359 { 00:10:42.359 "filename": "/tmp/spdk_mem_dump.txt" 00:10:42.359 } 00:10:42.359 04:53:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:42.359 04:53:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:42.359 DPDK memory size 814.000000 MiB in 1 heap(s) 00:10:42.360 1 heaps totaling size 814.000000 MiB 00:10:42.360 size: 814.000000 MiB heap id: 0 00:10:42.360 end heaps---------- 00:10:42.360 8 mempools totaling size 598.116089 MiB 00:10:42.360 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:42.360 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:42.360 size: 84.521057 MiB name: bdev_io_117178 00:10:42.360 size: 51.011292 MiB name: evtpool_117178 00:10:42.360 size: 50.003479 MiB name: msgpool_117178 00:10:42.360 size: 21.763794 MiB name: PDU_Pool 00:10:42.360 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:42.360 size: 0.026123 MiB name: Session_Pool 00:10:42.360 end mempools------- 00:10:42.360 6 memzones totaling size 4.142822 MiB 00:10:42.360 size: 1.000366 MiB name: RG_ring_0_117178 00:10:42.360 size: 1.000366 MiB name: RG_ring_1_117178 00:10:42.360 size: 1.000366 MiB name: RG_ring_4_117178 00:10:42.360 size: 1.000366 MiB name: RG_ring_5_117178 00:10:42.360 size: 0.125366 MiB name: RG_ring_2_117178 00:10:42.360 size: 0.015991 MiB name: RG_ring_3_117178 00:10:42.360 end memzones------- 00:10:42.360 04:53:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:42.360 heap id: 0 total size: 814.000000 MiB number of busy elements: 219 number of free elements: 15 00:10:42.360 list of free elements. 
size: 12.486755 MiB 00:10:42.360 element at address: 0x200000400000 with size: 1.999512 MiB 00:10:42.360 element at address: 0x200018e00000 with size: 0.999878 MiB 00:10:42.360 element at address: 0x200019000000 with size: 0.999878 MiB 00:10:42.360 element at address: 0x200003e00000 with size: 0.996277 MiB 00:10:42.360 element at address: 0x200031c00000 with size: 0.994446 MiB 00:10:42.360 element at address: 0x200013800000 with size: 0.978699 MiB 00:10:42.360 element at address: 0x200007000000 with size: 0.959839 MiB 00:10:42.360 element at address: 0x200019200000 with size: 0.936584 MiB 00:10:42.360 element at address: 0x200000200000 with size: 0.837219 MiB 00:10:42.360 element at address: 0x20001aa00000 with size: 0.568420 MiB 00:10:42.360 element at address: 0x20000b200000 with size: 0.489624 MiB 00:10:42.360 element at address: 0x200000800000 with size: 0.486511 MiB 00:10:42.360 element at address: 0x200019400000 with size: 0.485657 MiB 00:10:42.360 element at address: 0x200027e00000 with size: 0.402527 MiB 00:10:42.360 element at address: 0x200003a00000 with size: 0.351685 MiB 00:10:42.360 list of standard malloc elements. size: 199.250671 MiB 00:10:42.360 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:10:42.360 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:10:42.360 element at address: 0x200018efff80 with size: 1.000122 MiB 00:10:42.360 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:10:42.360 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:10:42.360 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:10:42.360 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:10:42.360 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:10:42.360 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:10:42.360 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:10:42.360 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:10:42.360 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:10:42.360 element at address: 0x20000087c980 with size: 0.000183 MiB 00:10:42.360 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:10:42.360 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:10:42.360 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:10:42.360 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:10:42.360 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:10:42.360 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:10:42.360 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003adb300 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003adb500 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003affa80 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003affb40 with size: 0.000183 MiB 00:10:42.360 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:10:42.360 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:10:42.360 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:10:42.360 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:10:42.360 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:10:42.360 element at 
address: 0x20000b27d7c0 with size: 0.000183 MiB 00:10:42.360 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:10:42.361 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:10:42.361 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:10:42.361 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa93580 
with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:10:42.361 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e670c0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e67180 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6dd80 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6e280 with size: 0.000183 MiB 
00:10:42.361 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:10:42.361 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:10:42.362 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:10:42.362 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:10:42.362 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:10:42.362 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:10:42.362 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:10:42.362 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:10:42.362 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:10:42.362 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:10:42.362 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:10:42.362 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:10:42.362 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:10:42.362 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:10:42.362 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:10:42.362 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:10:42.362 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:10:42.362 list of memzone associated elements. 
size: 602.262573 MiB 00:10:42.362 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:10:42.362 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:42.362 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:10:42.362 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:42.362 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:10:42.362 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_117178_0 00:10:42.362 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:10:42.362 associated memzone info: size: 48.002930 MiB name: MP_evtpool_117178_0 00:10:42.362 element at address: 0x200003fff380 with size: 48.003052 MiB 00:10:42.362 associated memzone info: size: 48.002930 MiB name: MP_msgpool_117178_0 00:10:42.362 element at address: 0x2000195be940 with size: 20.255554 MiB 00:10:42.362 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:42.362 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:10:42.362 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:42.362 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:10:42.362 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_117178 00:10:42.362 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:10:42.362 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_117178 00:10:42.362 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:10:42.362 associated memzone info: size: 1.007996 MiB name: MP_evtpool_117178 00:10:42.362 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:10:42.362 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:42.362 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:10:42.362 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:42.362 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:10:42.362 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:42.362 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:10:42.362 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:42.362 element at address: 0x200003eff180 with size: 1.000488 MiB 00:10:42.362 associated memzone info: size: 1.000366 MiB name: RG_ring_0_117178 00:10:42.362 element at address: 0x200003affc00 with size: 1.000488 MiB 00:10:42.362 associated memzone info: size: 1.000366 MiB name: RG_ring_1_117178 00:10:42.362 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:10:42.362 associated memzone info: size: 1.000366 MiB name: RG_ring_4_117178 00:10:42.362 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:10:42.362 associated memzone info: size: 1.000366 MiB name: RG_ring_5_117178 00:10:42.362 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:10:42.362 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_117178 00:10:42.362 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:10:42.362 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:42.362 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:10:42.362 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:42.362 element at address: 0x20001947c540 with size: 0.250488 MiB 00:10:42.362 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:42.362 element at address: 0x200003adf880 with size: 0.125488 MiB 00:10:42.362 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_117178 00:10:42.362 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:10:42.362 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:42.362 element at address: 0x200027e67240 with size: 0.023743 MiB 00:10:42.362 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:42.362 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:10:42.362 associated memzone info: size: 0.015991 MiB name: RG_ring_3_117178 00:10:42.362 element at address: 0x200027e6d380 with size: 0.002441 MiB 00:10:42.362 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:42.362 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:10:42.362 associated memzone info: size: 0.000183 MiB name: MP_msgpool_117178 00:10:42.362 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:10:42.362 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_117178 00:10:42.362 element at address: 0x200027e6de40 with size: 0.000305 MiB 00:10:42.362 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:42.362 04:53:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:42.362 04:53:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 117178 00:10:42.362 04:53:12 -- common/autotest_common.sh@926 -- # '[' -z 117178 ']' 00:10:42.362 04:53:12 -- common/autotest_common.sh@930 -- # kill -0 117178 00:10:42.362 04:53:12 -- common/autotest_common.sh@931 -- # uname 00:10:42.362 04:53:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:42.362 04:53:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117178 00:10:42.362 04:53:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:42.362 killing process with pid 117178 00:10:42.362 04:53:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:42.362 04:53:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117178' 00:10:42.362 04:53:12 -- common/autotest_common.sh@945 -- # kill 117178 00:10:42.362 04:53:12 -- common/autotest_common.sh@950 -- # wait 117178 00:10:43.298 00:10:43.298 real 0m2.078s 00:10:43.298 user 0m2.000s 00:10:43.298 sys 0m0.684s 00:10:43.298 04:53:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.298 04:53:12 -- common/autotest_common.sh@10 -- # set +x 00:10:43.298 ************************************ 00:10:43.298 END TEST dpdk_mem_utility 00:10:43.298 ************************************ 00:10:43.298 04:53:12 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:43.298 04:53:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:43.298 04:53:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:43.298 04:53:12 -- common/autotest_common.sh@10 -- # set +x 00:10:43.298 ************************************ 00:10:43.298 START TEST event 00:10:43.298 ************************************ 00:10:43.298 04:53:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:43.298 * Looking for test storage... 
00:10:43.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:43.298 04:53:13 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:43.298 04:53:13 -- bdev/nbd_common.sh@6 -- # set -e 00:10:43.298 04:53:13 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:43.298 04:53:13 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:43.298 04:53:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:43.298 04:53:13 -- common/autotest_common.sh@10 -- # set +x 00:10:43.298 ************************************ 00:10:43.298 START TEST event_perf 00:10:43.298 ************************************ 00:10:43.298 04:53:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:43.298 Running I/O for 1 seconds...[2024-04-27 04:53:13.069802] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:43.298 [2024-04-27 04:53:13.070794] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117269 ] 00:10:43.556 [2024-04-27 04:53:13.265709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.556 [2024-04-27 04:53:13.374022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.556 [2024-04-27 04:53:13.374255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.556 [2024-04-27 04:53:13.374261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.556 Running I/O for 1 seconds...[2024-04-27 04:53:13.375455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.931 00:10:44.931 lcore 0: 188918 00:10:44.931 lcore 1: 188920 00:10:44.931 lcore 2: 188913 00:10:44.931 lcore 3: 188915 00:10:44.931 done. 00:10:44.931 00:10:44.931 real 0m1.486s 00:10:44.931 user 0m4.212s 00:10:44.931 sys 0m0.157s 00:10:44.931 04:53:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.931 04:53:14 -- common/autotest_common.sh@10 -- # set +x 00:10:44.931 ************************************ 00:10:44.931 END TEST event_perf 00:10:44.931 ************************************ 00:10:44.931 04:53:14 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:44.931 04:53:14 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:44.931 04:53:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:44.931 04:53:14 -- common/autotest_common.sh@10 -- # set +x 00:10:44.931 ************************************ 00:10:44.931 START TEST event_reactor 00:10:44.931 ************************************ 00:10:44.931 04:53:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:44.931 [2024-04-27 04:53:14.610265] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
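The four lcore counters above are the per-reactor event counts from the one-second event_perf run on core mask 0xF. The same binary can be pointed at a different mask or duration by hand, for example (a rough sketch, assuming the test binaries are built and run from the repo root):

  ./test/event/event_perf/event_perf -m 0x3 -t 5   # two reactors (cores 0 and 1), run for 5 seconds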
00:10:44.931 [2024-04-27 04:53:14.610463] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117310 ] 00:10:44.931 [2024-04-27 04:53:14.771393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.189 [2024-04-27 04:53:14.889477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.563 test_start 00:10:46.563 oneshot 00:10:46.563 tick 100 00:10:46.563 tick 100 00:10:46.563 tick 250 00:10:46.563 tick 100 00:10:46.563 tick 100 00:10:46.563 tick 100 00:10:46.563 tick 250 00:10:46.563 tick 500 00:10:46.563 tick 100 00:10:46.563 tick 100 00:10:46.563 tick 250 00:10:46.563 tick 100 00:10:46.563 tick 100 00:10:46.563 test_end 00:10:46.563 00:10:46.563 real 0m1.447s 00:10:46.563 user 0m1.239s 00:10:46.563 sys 0m0.108s 00:10:46.563 04:53:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.563 04:53:16 -- common/autotest_common.sh@10 -- # set +x 00:10:46.564 ************************************ 00:10:46.564 END TEST event_reactor 00:10:46.564 ************************************ 00:10:46.564 04:53:16 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:46.564 04:53:16 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:46.564 04:53:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:46.564 04:53:16 -- common/autotest_common.sh@10 -- # set +x 00:10:46.564 ************************************ 00:10:46.564 START TEST event_reactor_perf 00:10:46.564 ************************************ 00:10:46.564 04:53:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:46.564 [2024-04-27 04:53:16.116371] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:10:46.564 [2024-04-27 04:53:16.116744] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117361 ] 00:10:46.564 [2024-04-27 04:53:16.294832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.564 [2024-04-27 04:53:16.413763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.936 test_start 00:10:47.936 test_end 00:10:47.936 Performance: 305718 events per second 00:10:47.936 00:10:47.936 real 0m1.483s 00:10:47.936 user 0m1.231s 00:10:47.936 sys 0m0.152s 00:10:47.936 04:53:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.936 04:53:17 -- common/autotest_common.sh@10 -- # set +x 00:10:47.936 ************************************ 00:10:47.936 END TEST event_reactor_perf 00:10:47.936 ************************************ 00:10:47.936 04:53:17 -- event/event.sh@49 -- # uname -s 00:10:47.936 04:53:17 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:47.936 04:53:17 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:47.936 04:53:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:47.936 04:53:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:47.936 04:53:17 -- common/autotest_common.sh@10 -- # set +x 00:10:47.936 ************************************ 00:10:47.936 START TEST event_scheduler 00:10:47.936 ************************************ 00:10:47.936 04:53:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:47.936 * Looking for test storage... 00:10:47.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:47.936 04:53:17 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:47.936 04:53:17 -- scheduler/scheduler.sh@35 -- # scheduler_pid=117424 00:10:47.936 04:53:17 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:47.936 04:53:17 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:47.936 04:53:17 -- scheduler/scheduler.sh@37 -- # waitforlisten 117424 00:10:47.936 04:53:17 -- common/autotest_common.sh@819 -- # '[' -z 117424 ']' 00:10:47.936 04:53:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.936 04:53:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:47.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.937 04:53:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.937 04:53:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:47.937 04:53:17 -- common/autotest_common.sh@10 -- # set +x 00:10:47.937 [2024-04-27 04:53:17.801669] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
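The scheduler test just launched starts its app with all four cores (-m 0xF), main lcore 2 (-p 0x2) and --wait-for-rpc, then finishes configuration over RPC before subsystem init. A sketch of that handshake with plain rpc.py calls (same default socket assumed):

  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  ./scripts/rpc.py framework_set_scheduler dynamic   # select the dynamic scheduler while init is held off
  ./scripts/rpc.py framework_start_init              # let subsystem initialization complete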
00:10:47.937 [2024-04-27 04:53:17.801912] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117424 ] 00:10:48.195 [2024-04-27 04:53:18.002742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.453 [2024-04-27 04:53:18.132812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.453 [2024-04-27 04:53:18.132961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.453 [2024-04-27 04:53:18.134069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.453 [2024-04-27 04:53:18.134076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.019 04:53:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:49.019 04:53:18 -- common/autotest_common.sh@852 -- # return 0 00:10:49.019 04:53:18 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:49.019 04:53:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.019 04:53:18 -- common/autotest_common.sh@10 -- # set +x 00:10:49.019 POWER: Env isn't set yet! 00:10:49.019 POWER: Attempting to initialise ACPI cpufreq power management... 00:10:49.019 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:49.019 POWER: Cannot set governor of lcore 0 to userspace 00:10:49.019 POWER: Attempting to initialise PSTAT power management... 00:10:49.019 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:49.019 POWER: Cannot set governor of lcore 0 to performance 00:10:49.019 POWER: Attempting to initialise AMD PSTATE power management... 00:10:49.019 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:49.019 POWER: Cannot set governor of lcore 0 to userspace 00:10:49.019 POWER: Attempting to initialise CPPC power management... 00:10:49.019 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:49.019 POWER: Cannot set governor of lcore 0 to userspace 00:10:49.019 POWER: Attempting to initialise VM power management... 00:10:49.019 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:49.019 POWER: Unable to set Power Management Environment for lcore 0 00:10:49.019 [2024-04-27 04:53:18.793210] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:10:49.019 [2024-04-27 04:53:18.793542] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:10:49.019 [2024-04-27 04:53:18.793776] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:10:49.019 04:53:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:49.019 04:53:18 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:49.019 04:53:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.019 04:53:18 -- common/autotest_common.sh@10 -- # set +x 00:10:49.277 [2024-04-27 04:53:18.942109] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
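The scheduler_create_thread subtest that follows exercises the test app's plugin RPCs: it creates pinned busy and idle threads, changes one thread's active level, and deletes another. The calls take this shape (sketch; assumes the scheduler_plugin module is importable by rpc.py, which the harness arranges):

  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # 100% busy thread pinned to core 0
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0       # idle thread pinned to core 1
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # drop thread 11 to 50% activity
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                               # remove thread 12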
00:10:49.277 04:53:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:49.277 04:53:18 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:49.277 04:53:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:49.277 04:53:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:49.277 04:53:18 -- common/autotest_common.sh@10 -- # set +x 00:10:49.277 ************************************ 00:10:49.277 START TEST scheduler_create_thread 00:10:49.277 ************************************ 00:10:49.277 04:53:18 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:10:49.277 04:53:18 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:49.277 04:53:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.277 04:53:18 -- common/autotest_common.sh@10 -- # set +x 00:10:49.277 2 00:10:49.277 04:53:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:49.277 04:53:18 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:49.277 04:53:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.277 04:53:18 -- common/autotest_common.sh@10 -- # set +x 00:10:49.277 3 00:10:49.277 04:53:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:49.277 04:53:18 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:49.277 04:53:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.277 04:53:18 -- common/autotest_common.sh@10 -- # set +x 00:10:49.277 4 00:10:49.277 04:53:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:49.277 04:53:18 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:49.277 04:53:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.277 04:53:18 -- common/autotest_common.sh@10 -- # set +x 00:10:49.277 5 00:10:49.277 04:53:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:49.277 04:53:18 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:49.277 04:53:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.277 04:53:18 -- common/autotest_common.sh@10 -- # set +x 00:10:49.277 6 00:10:49.277 04:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:49.277 04:53:19 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:49.277 04:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.277 04:53:19 -- common/autotest_common.sh@10 -- # set +x 00:10:49.277 7 00:10:49.277 04:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:49.277 04:53:19 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:49.277 04:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.277 04:53:19 -- common/autotest_common.sh@10 -- # set +x 00:10:49.277 8 00:10:49.277 04:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:49.277 04:53:19 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:49.277 04:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.277 04:53:19 -- common/autotest_common.sh@10 -- # set +x 00:10:49.277 9 00:10:49.277 
04:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:49.277 04:53:19 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:49.277 04:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.278 04:53:19 -- common/autotest_common.sh@10 -- # set +x 00:10:49.278 10 00:10:49.278 04:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:49.278 04:53:19 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:49.278 04:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.278 04:53:19 -- common/autotest_common.sh@10 -- # set +x 00:10:49.278 04:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:49.278 04:53:19 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:49.278 04:53:19 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:49.278 04:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.278 04:53:19 -- common/autotest_common.sh@10 -- # set +x 00:10:49.278 04:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:49.278 04:53:19 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:49.278 04:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.278 04:53:19 -- common/autotest_common.sh@10 -- # set +x 00:10:50.651 04:53:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:50.651 04:53:20 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:50.651 04:53:20 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:50.651 04:53:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:50.651 04:53:20 -- common/autotest_common.sh@10 -- # set +x 00:10:52.020 04:53:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:52.020 00:10:52.020 real 0m2.629s 00:10:52.020 user 0m0.025s 00:10:52.020 sys 0m0.000s 00:10:52.020 04:53:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.020 ************************************ 00:10:52.020 END TEST scheduler_create_thread 00:10:52.020 04:53:21 -- common/autotest_common.sh@10 -- # set +x 00:10:52.020 ************************************ 00:10:52.020 04:53:21 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:52.020 04:53:21 -- scheduler/scheduler.sh@46 -- # killprocess 117424 00:10:52.020 04:53:21 -- common/autotest_common.sh@926 -- # '[' -z 117424 ']' 00:10:52.020 04:53:21 -- common/autotest_common.sh@930 -- # kill -0 117424 00:10:52.020 04:53:21 -- common/autotest_common.sh@931 -- # uname 00:10:52.020 04:53:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:52.020 04:53:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117424 00:10:52.020 04:53:21 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:10:52.020 04:53:21 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:10:52.020 04:53:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117424' 00:10:52.020 killing process with pid 117424 00:10:52.020 04:53:21 -- common/autotest_common.sh@945 -- # kill 117424 00:10:52.020 04:53:21 -- common/autotest_common.sh@950 -- # wait 117424 00:10:52.278 [2024-04-27 04:53:22.068500] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:10:52.844 00:10:52.844 real 0m4.881s 00:10:52.844 user 0m8.739s 00:10:52.844 sys 0m0.491s 00:10:52.844 04:53:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.844 04:53:22 -- common/autotest_common.sh@10 -- # set +x 00:10:52.844 ************************************ 00:10:52.844 END TEST event_scheduler 00:10:52.844 ************************************ 00:10:52.844 04:53:22 -- event/event.sh@51 -- # modprobe -n nbd 00:10:52.844 04:53:22 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:52.844 04:53:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:52.844 04:53:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:52.844 04:53:22 -- common/autotest_common.sh@10 -- # set +x 00:10:52.844 ************************************ 00:10:52.844 START TEST app_repeat 00:10:52.844 ************************************ 00:10:52.844 04:53:22 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:10:52.844 04:53:22 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:52.844 04:53:22 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:52.844 04:53:22 -- event/event.sh@13 -- # local nbd_list 00:10:52.844 04:53:22 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:52.844 04:53:22 -- event/event.sh@14 -- # local bdev_list 00:10:52.844 04:53:22 -- event/event.sh@15 -- # local repeat_times=4 00:10:52.844 04:53:22 -- event/event.sh@17 -- # modprobe nbd 00:10:52.844 04:53:22 -- event/event.sh@19 -- # repeat_pid=117550 00:10:52.844 04:53:22 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:52.844 04:53:22 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:52.844 Process app_repeat pid: 117550 00:10:52.844 04:53:22 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 117550' 00:10:52.844 04:53:22 -- event/event.sh@23 -- # for i in {0..2} 00:10:52.844 spdk_app_start Round 0 00:10:52.844 04:53:22 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:52.844 04:53:22 -- event/event.sh@25 -- # waitforlisten 117550 /var/tmp/spdk-nbd.sock 00:10:52.844 04:53:22 -- common/autotest_common.sh@819 -- # '[' -z 117550 ']' 00:10:52.844 04:53:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:52.844 04:53:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:52.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:52.844 04:53:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:52.844 04:53:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:52.844 04:53:22 -- common/autotest_common.sh@10 -- # set +x 00:10:52.844 [2024-04-27 04:53:22.614884] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
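Each app_repeat round below rebuilds the same topology over the /var/tmp/spdk-nbd.sock socket: two 64 MiB malloc bdevs (4 KiB blocks) exported as /dev/nbd0 and /dev/nbd1, written to, verified, and torn down. Condensed to its RPCs, one round looks roughly like:

  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # 64 MiB bdev, 4096-byte blocks -> Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0  # export the bdev as /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks                     # confirm the exported devices
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0           # detach when the round is done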
00:10:52.844 [2024-04-27 04:53:22.615243] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117550 ] 00:10:53.103 [2024-04-27 04:53:22.797757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:53.103 [2024-04-27 04:53:22.918092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.103 [2024-04-27 04:53:22.918107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.036 04:53:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:54.036 04:53:23 -- common/autotest_common.sh@852 -- # return 0 00:10:54.036 04:53:23 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:54.295 Malloc0 00:10:54.295 04:53:24 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:54.553 Malloc1 00:10:54.553 04:53:24 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:54.553 04:53:24 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:54.553 04:53:24 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:54.553 04:53:24 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:54.553 04:53:24 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:54.553 04:53:24 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:54.553 04:53:24 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:54.553 04:53:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:54.553 04:53:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:54.553 04:53:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:54.553 04:53:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:54.553 04:53:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:54.553 04:53:24 -- bdev/nbd_common.sh@12 -- # local i 00:10:54.553 04:53:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:54.553 04:53:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:54.553 04:53:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:54.812 /dev/nbd0 00:10:54.812 04:53:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:54.812 04:53:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:54.812 04:53:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:10:54.812 04:53:24 -- common/autotest_common.sh@857 -- # local i 00:10:54.812 04:53:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:54.812 04:53:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:54.812 04:53:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:10:54.812 04:53:24 -- common/autotest_common.sh@861 -- # break 00:10:54.812 04:53:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:54.812 04:53:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:54.812 04:53:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:54.812 1+0 records in 00:10:54.812 1+0 records out 00:10:54.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284234 s, 14.4 MB/s 00:10:54.812 04:53:24 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:54.812 04:53:24 -- common/autotest_common.sh@874 -- # size=4096 00:10:54.812 04:53:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:54.812 04:53:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:54.812 04:53:24 -- common/autotest_common.sh@877 -- # return 0 00:10:54.812 04:53:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:54.812 04:53:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:54.812 04:53:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:55.409 /dev/nbd1 00:10:55.409 04:53:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:55.409 04:53:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:55.409 04:53:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:10:55.409 04:53:25 -- common/autotest_common.sh@857 -- # local i 00:10:55.409 04:53:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:55.409 04:53:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:55.409 04:53:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:10:55.409 04:53:25 -- common/autotest_common.sh@861 -- # break 00:10:55.409 04:53:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:55.409 04:53:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:55.409 04:53:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:55.409 1+0 records in 00:10:55.409 1+0 records out 00:10:55.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369004 s, 11.1 MB/s 00:10:55.409 04:53:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:55.409 04:53:25 -- common/autotest_common.sh@874 -- # size=4096 00:10:55.409 04:53:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:55.409 04:53:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:55.409 04:53:25 -- common/autotest_common.sh@877 -- # return 0 00:10:55.409 04:53:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:55.409 04:53:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:55.409 04:53:25 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:55.409 04:53:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:55.409 04:53:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:55.699 04:53:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:55.699 { 00:10:55.699 "nbd_device": "/dev/nbd0", 00:10:55.699 "bdev_name": "Malloc0" 00:10:55.699 }, 00:10:55.699 { 00:10:55.699 "nbd_device": "/dev/nbd1", 00:10:55.699 "bdev_name": "Malloc1" 00:10:55.699 } 00:10:55.699 ]' 00:10:55.699 04:53:25 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:55.699 { 00:10:55.699 "nbd_device": "/dev/nbd0", 00:10:55.699 "bdev_name": "Malloc0" 00:10:55.699 }, 00:10:55.699 { 00:10:55.699 "nbd_device": "/dev/nbd1", 00:10:55.699 "bdev_name": "Malloc1" 00:10:55.699 } 00:10:55.699 ]' 00:10:55.699 04:53:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:55.699 04:53:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:55.699 /dev/nbd1' 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:55.700 /dev/nbd1' 00:10:55.700 04:53:25 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@65 -- # count=2 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@66 -- # echo 2 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@95 -- # count=2 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:55.700 256+0 records in 00:10:55.700 256+0 records out 00:10:55.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114897 s, 91.3 MB/s 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:55.700 256+0 records in 00:10:55.700 256+0 records out 00:10:55.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023044 s, 45.5 MB/s 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:55.700 256+0 records in 00:10:55.700 256+0 records out 00:10:55.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304749 s, 34.4 MB/s 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@51 -- # local i 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:55.700 04:53:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:55.958 04:53:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:55.958 04:53:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:55.958 04:53:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:55.958 04:53:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:55.958 04:53:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:55.958 04:53:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:55.958 04:53:25 -- bdev/nbd_common.sh@41 -- # break 00:10:55.958 04:53:25 -- bdev/nbd_common.sh@45 -- # return 0 00:10:55.958 04:53:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:55.958 04:53:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:56.216 04:53:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:56.216 04:53:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:56.216 04:53:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:56.216 04:53:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:56.216 04:53:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:56.216 04:53:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:56.216 04:53:26 -- bdev/nbd_common.sh@41 -- # break 00:10:56.216 04:53:26 -- bdev/nbd_common.sh@45 -- # return 0 00:10:56.216 04:53:26 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:56.216 04:53:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:56.216 04:53:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:56.474 04:53:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:56.474 04:53:26 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:56.474 04:53:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:56.732 04:53:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:56.732 04:53:26 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:56.732 04:53:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:56.732 04:53:26 -- bdev/nbd_common.sh@65 -- # true 00:10:56.732 04:53:26 -- bdev/nbd_common.sh@65 -- # count=0 00:10:56.732 04:53:26 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:56.732 04:53:26 -- bdev/nbd_common.sh@104 -- # count=0 00:10:56.732 04:53:26 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:56.732 04:53:26 -- bdev/nbd_common.sh@109 -- # return 0 00:10:56.732 04:53:26 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:56.990 04:53:26 -- event/event.sh@35 -- # sleep 3 00:10:57.248 [2024-04-27 04:53:27.073755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:57.507 [2024-04-27 04:53:27.146938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.507 [2024-04-27 04:53:27.146946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.507 [2024-04-27 04:53:27.239099] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:57.507 [2024-04-27 04:53:27.239307] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
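Editor's note: the round traced above is the core of app_repeat's data path. Two malloc bdevs are created over the /var/tmp/spdk-nbd.sock RPC socket, exported as /dev/nbd0 and /dev/nbd1, primed from a 1 MiB random file, read back and compared, then torn down before the app is killed and the next round starts. A condensed sketch of that flow, assuming an SPDK app is already listening on the socket and the nbd kernel module is loaded; the scratch-file path is illustrative, not the one the test uses:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  tmp=/tmp/nbdrandtest                                   # illustrative scratch file

  $rpc -s $sock bdev_malloc_create 64 4096               # -> Malloc0 (64 MB, 4 KiB blocks)
  $rpc -s $sock bdev_malloc_create 64 4096               # -> Malloc1
  $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
  $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1

  dd if=/dev/urandom of=$tmp bs=4096 count=256           # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct  # write the pattern to each export
      cmp -b -n 1M $tmp $nbd                             # verify it reads back identically
  done
  rm $tmp

  $rpc -s $sock nbd_stop_disk /dev/nbd0
  $rpc -s $sock nbd_stop_disk /dev/nbd1
  $rpc -s $sock nbd_get_disks                            # expect an empty list now
  $rpc -s $sock spdk_kill_instance SIGTERM               # end this app_repeat round

The test then sleeps a few seconds and repeats the same sequence, which is why the Malloc0/Malloc1 and nbd0/nbd1 blocks recur below almost verbatim.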
00:11:00.035 04:53:29 -- event/event.sh@23 -- # for i in {0..2} 00:11:00.035 spdk_app_start Round 1 00:11:00.035 04:53:29 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:00.035 04:53:29 -- event/event.sh@25 -- # waitforlisten 117550 /var/tmp/spdk-nbd.sock 00:11:00.035 04:53:29 -- common/autotest_common.sh@819 -- # '[' -z 117550 ']' 00:11:00.035 04:53:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:00.035 04:53:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:00.035 04:53:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:00.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:00.035 04:53:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:00.035 04:53:29 -- common/autotest_common.sh@10 -- # set +x 00:11:00.293 04:53:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:00.293 04:53:30 -- common/autotest_common.sh@852 -- # return 0 00:11:00.293 04:53:30 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:00.553 Malloc0 00:11:00.553 04:53:30 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:00.816 Malloc1 00:11:00.816 04:53:30 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:00.816 04:53:30 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.816 04:53:30 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:00.816 04:53:30 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:00.816 04:53:30 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:00.816 04:53:30 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:00.816 04:53:30 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:00.816 04:53:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.816 04:53:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:00.816 04:53:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:00.816 04:53:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:00.816 04:53:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:00.816 04:53:30 -- bdev/nbd_common.sh@12 -- # local i 00:11:00.816 04:53:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:00.816 04:53:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:00.816 04:53:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:01.074 /dev/nbd0 00:11:01.074 04:53:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:01.074 04:53:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:01.074 04:53:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:11:01.074 04:53:30 -- common/autotest_common.sh@857 -- # local i 00:11:01.074 04:53:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:01.074 04:53:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:01.074 04:53:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:11:01.074 04:53:30 -- common/autotest_common.sh@861 -- # break 00:11:01.074 04:53:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:01.074 04:53:30 -- common/autotest_common.sh@872 -- # (( 
i <= 20 )) 00:11:01.074 04:53:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:01.074 1+0 records in 00:11:01.074 1+0 records out 00:11:01.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265467 s, 15.4 MB/s 00:11:01.074 04:53:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:01.074 04:53:30 -- common/autotest_common.sh@874 -- # size=4096 00:11:01.074 04:53:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:01.074 04:53:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:01.074 04:53:30 -- common/autotest_common.sh@877 -- # return 0 00:11:01.074 04:53:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:01.074 04:53:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:01.074 04:53:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:01.642 /dev/nbd1 00:11:01.642 04:53:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:01.642 04:53:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:01.642 04:53:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:11:01.642 04:53:31 -- common/autotest_common.sh@857 -- # local i 00:11:01.642 04:53:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:01.642 04:53:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:01.642 04:53:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:11:01.642 04:53:31 -- common/autotest_common.sh@861 -- # break 00:11:01.642 04:53:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:01.642 04:53:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:01.642 04:53:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:01.642 1+0 records in 00:11:01.642 1+0 records out 00:11:01.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245112 s, 16.7 MB/s 00:11:01.642 04:53:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:01.642 04:53:31 -- common/autotest_common.sh@874 -- # size=4096 00:11:01.642 04:53:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:01.642 04:53:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:01.642 04:53:31 -- common/autotest_common.sh@877 -- # return 0 00:11:01.642 04:53:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:01.642 04:53:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:01.642 04:53:31 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:01.642 04:53:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:01.642 04:53:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:01.901 { 00:11:01.901 "nbd_device": "/dev/nbd0", 00:11:01.901 "bdev_name": "Malloc0" 00:11:01.901 }, 00:11:01.901 { 00:11:01.901 "nbd_device": "/dev/nbd1", 00:11:01.901 "bdev_name": "Malloc1" 00:11:01.901 } 00:11:01.901 ]' 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:01.901 { 00:11:01.901 "nbd_device": "/dev/nbd0", 00:11:01.901 "bdev_name": "Malloc0" 00:11:01.901 }, 00:11:01.901 { 00:11:01.901 "nbd_device": "/dev/nbd1", 00:11:01.901 "bdev_name": "Malloc1" 00:11:01.901 } 
00:11:01.901 ]' 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:01.901 /dev/nbd1' 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:01.901 /dev/nbd1' 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@65 -- # count=2 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@66 -- # echo 2 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@95 -- # count=2 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:01.901 256+0 records in 00:11:01.901 256+0 records out 00:11:01.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00707883 s, 148 MB/s 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:01.901 256+0 records in 00:11:01.901 256+0 records out 00:11:01.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270955 s, 38.7 MB/s 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:01.901 256+0 records in 00:11:01.901 256+0 records out 00:11:01.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293378 s, 35.7 MB/s 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:11:01.901 04:53:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@51 -- # local i 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.901 04:53:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:02.160 04:53:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:02.160 04:53:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:02.160 04:53:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:02.160 04:53:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:02.160 04:53:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.160 04:53:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:02.160 04:53:32 -- bdev/nbd_common.sh@41 -- # break 00:11:02.160 04:53:32 -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.160 04:53:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.160 04:53:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:02.418 04:53:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:02.418 04:53:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:02.418 04:53:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:02.418 04:53:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:02.418 04:53:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.418 04:53:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:02.418 04:53:32 -- bdev/nbd_common.sh@41 -- # break 00:11:02.418 04:53:32 -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.418 04:53:32 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:02.418 04:53:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:02.418 04:53:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:02.985 04:53:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:02.985 04:53:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:02.985 04:53:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:02.985 04:53:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:02.985 04:53:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:02.985 04:53:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:02.985 04:53:32 -- bdev/nbd_common.sh@65 -- # true 00:11:02.985 04:53:32 -- bdev/nbd_common.sh@65 -- # count=0 00:11:02.985 04:53:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:02.985 04:53:32 -- bdev/nbd_common.sh@104 -- # count=0 00:11:02.985 04:53:32 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:02.985 04:53:32 -- bdev/nbd_common.sh@109 -- # return 0 00:11:02.985 04:53:32 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:03.243 04:53:33 -- event/event.sh@35 -- # sleep 3 00:11:03.501 [2024-04-27 04:53:33.331436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:03.758 [2024-04-27 04:53:33.432511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.758 [2024-04-27 04:53:33.432517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.758 [2024-04-27 04:53:33.525888] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
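Editor's note: each nbd_start_disk above is immediately followed by the waitfornbd helper visible in the trace: it polls /proc/partitions until the new device name shows up, then proves the export is readable with a single 4 KiB O_DIRECT read whose size is checked with stat. A simplified stand-alone sketch of that probe; the retry pause is an assumption, and the real helper in autotest_common.sh may pace and bound its loop differently:

  # Simplified readiness probe for a freshly started NBD export (sketch only).
  waitfornbd_sketch() {
      local nbd_name=$1                                  # e.g. nbd0
      local tmp=/tmp/nbdtest                             # illustrative scratch file
      local i size

      for (( i = 1; i <= 20; i++ )); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                                      # assumption: brief pause between polls
      done

      # One O_DIRECT block read shows the kernel really attached the device.
      dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
      size=$(stat -c %s "$tmp")
      rm -f "$tmp"
      [ "$size" != 0 ]                                   # non-empty read => device is usable
  }

  waitfornbd_sketch nbd0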
00:11:03.758 [2024-04-27 04:53:33.526032] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:06.286 spdk_app_start Round 2 00:11:06.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:06.286 04:53:36 -- event/event.sh@23 -- # for i in {0..2} 00:11:06.286 04:53:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:06.286 04:53:36 -- event/event.sh@25 -- # waitforlisten 117550 /var/tmp/spdk-nbd.sock 00:11:06.286 04:53:36 -- common/autotest_common.sh@819 -- # '[' -z 117550 ']' 00:11:06.286 04:53:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:06.286 04:53:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:06.286 04:53:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:06.286 04:53:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:06.286 04:53:36 -- common/autotest_common.sh@10 -- # set +x 00:11:06.544 04:53:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:06.544 04:53:36 -- common/autotest_common.sh@852 -- # return 0 00:11:06.544 04:53:36 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:06.802 Malloc0 00:11:06.802 04:53:36 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:07.071 Malloc1 00:11:07.071 04:53:36 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:07.071 04:53:36 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:07.071 04:53:36 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:07.071 04:53:36 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:07.071 04:53:36 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:07.071 04:53:36 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:07.071 04:53:36 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:07.072 04:53:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:07.072 04:53:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:07.072 04:53:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:07.072 04:53:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:07.072 04:53:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:07.072 04:53:36 -- bdev/nbd_common.sh@12 -- # local i 00:11:07.072 04:53:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:07.072 04:53:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:07.072 04:53:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:07.345 /dev/nbd0 00:11:07.345 04:53:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:07.345 04:53:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:07.345 04:53:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:11:07.345 04:53:37 -- common/autotest_common.sh@857 -- # local i 00:11:07.345 04:53:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:07.345 04:53:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:07.345 04:53:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:11:07.345 04:53:37 -- 
common/autotest_common.sh@861 -- # break 00:11:07.345 04:53:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:07.345 04:53:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:07.345 04:53:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:07.345 1+0 records in 00:11:07.346 1+0 records out 00:11:07.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058115 s, 7.0 MB/s 00:11:07.346 04:53:37 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:07.346 04:53:37 -- common/autotest_common.sh@874 -- # size=4096 00:11:07.346 04:53:37 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:07.346 04:53:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:07.346 04:53:37 -- common/autotest_common.sh@877 -- # return 0 00:11:07.346 04:53:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:07.346 04:53:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:07.346 04:53:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:07.605 /dev/nbd1 00:11:07.605 04:53:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:07.605 04:53:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:07.605 04:53:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:11:07.605 04:53:37 -- common/autotest_common.sh@857 -- # local i 00:11:07.605 04:53:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:07.605 04:53:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:07.605 04:53:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:11:07.605 04:53:37 -- common/autotest_common.sh@861 -- # break 00:11:07.605 04:53:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:07.605 04:53:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:07.605 04:53:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:07.605 1+0 records in 00:11:07.605 1+0 records out 00:11:07.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000645189 s, 6.3 MB/s 00:11:07.605 04:53:37 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:07.605 04:53:37 -- common/autotest_common.sh@874 -- # size=4096 00:11:07.605 04:53:37 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:07.605 04:53:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:07.605 04:53:37 -- common/autotest_common.sh@877 -- # return 0 00:11:07.605 04:53:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:07.605 04:53:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:07.605 04:53:37 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:07.605 04:53:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:07.605 04:53:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:07.864 04:53:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:07.864 { 00:11:07.864 "nbd_device": "/dev/nbd0", 00:11:07.864 "bdev_name": "Malloc0" 00:11:07.864 }, 00:11:07.864 { 00:11:07.864 "nbd_device": "/dev/nbd1", 00:11:07.864 "bdev_name": "Malloc1" 00:11:07.864 } 00:11:07.864 ]' 00:11:07.864 04:53:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:07.864 04:53:37 -- 
bdev/nbd_common.sh@64 -- # echo '[ 00:11:07.864 { 00:11:07.864 "nbd_device": "/dev/nbd0", 00:11:07.864 "bdev_name": "Malloc0" 00:11:07.864 }, 00:11:07.864 { 00:11:07.864 "nbd_device": "/dev/nbd1", 00:11:07.864 "bdev_name": "Malloc1" 00:11:07.864 } 00:11:07.864 ]' 00:11:07.864 04:53:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:07.864 /dev/nbd1' 00:11:07.864 04:53:37 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:07.864 /dev/nbd1' 00:11:07.864 04:53:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:07.864 04:53:37 -- bdev/nbd_common.sh@65 -- # count=2 00:11:07.864 04:53:37 -- bdev/nbd_common.sh@66 -- # echo 2 00:11:07.864 04:53:37 -- bdev/nbd_common.sh@95 -- # count=2 00:11:07.864 04:53:37 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:07.864 04:53:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:07.864 04:53:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:07.864 04:53:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:07.864 04:53:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:07.864 04:53:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:07.864 04:53:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:07.864 04:53:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:07.864 256+0 records in 00:11:07.864 256+0 records out 00:11:07.864 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011368 s, 92.2 MB/s 00:11:07.864 04:53:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:07.864 04:53:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:08.123 256+0 records in 00:11:08.124 256+0 records out 00:11:08.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274078 s, 38.3 MB/s 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:08.124 256+0 records in 00:11:08.124 256+0 records out 00:11:08.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.032735 s, 32.0 MB/s 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:08.124 
04:53:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@51 -- # local i 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:08.124 04:53:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:08.383 04:53:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:08.383 04:53:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:08.383 04:53:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:08.383 04:53:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:08.383 04:53:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:08.383 04:53:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:08.383 04:53:38 -- bdev/nbd_common.sh@41 -- # break 00:11:08.383 04:53:38 -- bdev/nbd_common.sh@45 -- # return 0 00:11:08.383 04:53:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:08.383 04:53:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:08.642 04:53:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:08.642 04:53:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:08.642 04:53:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:08.642 04:53:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:08.642 04:53:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:08.642 04:53:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:08.642 04:53:38 -- bdev/nbd_common.sh@41 -- # break 00:11:08.642 04:53:38 -- bdev/nbd_common.sh@45 -- # return 0 00:11:08.642 04:53:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:08.642 04:53:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:08.642 04:53:38 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:08.901 04:53:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:08.901 04:53:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:08.901 04:53:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:08.901 04:53:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:08.901 04:53:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:08.901 04:53:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:08.901 04:53:38 -- bdev/nbd_common.sh@65 -- # true 00:11:08.901 04:53:38 -- bdev/nbd_common.sh@65 -- # count=0 00:11:08.901 04:53:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:08.901 04:53:38 -- bdev/nbd_common.sh@104 -- # count=0 00:11:08.901 04:53:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:08.901 04:53:38 -- bdev/nbd_common.sh@109 -- # return 0 00:11:08.901 04:53:38 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:09.470 04:53:39 -- event/event.sh@35 -- # sleep 3 00:11:09.729 [2024-04-27 04:53:39.414182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:09.729 [2024-04-27 04:53:39.514637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.729 [2024-04-27 04:53:39.514664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.729 [2024-04-27 04:53:39.603230] notify.c: 45:spdk_notify_type_register: *NOTICE*: 
Notification type 'bdev_register' already registered. 00:11:09.729 [2024-04-27 04:53:39.603402] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:12.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:12.262 04:53:42 -- event/event.sh@38 -- # waitforlisten 117550 /var/tmp/spdk-nbd.sock 00:11:12.262 04:53:42 -- common/autotest_common.sh@819 -- # '[' -z 117550 ']' 00:11:12.262 04:53:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:12.262 04:53:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:12.262 04:53:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:12.262 04:53:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:12.262 04:53:42 -- common/autotest_common.sh@10 -- # set +x 00:11:12.521 04:53:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:12.521 04:53:42 -- common/autotest_common.sh@852 -- # return 0 00:11:12.521 04:53:42 -- event/event.sh@39 -- # killprocess 117550 00:11:12.521 04:53:42 -- common/autotest_common.sh@926 -- # '[' -z 117550 ']' 00:11:12.521 04:53:42 -- common/autotest_common.sh@930 -- # kill -0 117550 00:11:12.521 04:53:42 -- common/autotest_common.sh@931 -- # uname 00:11:12.521 04:53:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:12.521 04:53:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117550 00:11:12.521 killing process with pid 117550 00:11:12.521 04:53:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:12.521 04:53:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:12.521 04:53:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117550' 00:11:12.521 04:53:42 -- common/autotest_common.sh@945 -- # kill 117550 00:11:12.521 04:53:42 -- common/autotest_common.sh@950 -- # wait 117550 00:11:13.088 spdk_app_start is called in Round 0. 00:11:13.088 Shutdown signal received, stop current app iteration 00:11:13.088 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:11:13.088 spdk_app_start is called in Round 1. 00:11:13.088 Shutdown signal received, stop current app iteration 00:11:13.088 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:11:13.088 spdk_app_start is called in Round 2. 00:11:13.088 Shutdown signal received, stop current app iteration 00:11:13.088 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:11:13.088 spdk_app_start is called in Round 3. 
00:11:13.088 Shutdown signal received, stop current app iteration 00:11:13.088 ************************************ 00:11:13.088 END TEST app_repeat 00:11:13.088 ************************************ 00:11:13.088 04:53:42 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:13.088 04:53:42 -- event/event.sh@42 -- # return 0 00:11:13.088 00:11:13.088 real 0m20.197s 00:11:13.088 user 0m45.199s 00:11:13.088 sys 0m3.424s 00:11:13.088 04:53:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.088 04:53:42 -- common/autotest_common.sh@10 -- # set +x 00:11:13.088 04:53:42 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:13.088 04:53:42 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:13.088 04:53:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:13.089 04:53:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:13.089 04:53:42 -- common/autotest_common.sh@10 -- # set +x 00:11:13.089 ************************************ 00:11:13.089 START TEST cpu_locks 00:11:13.089 ************************************ 00:11:13.089 04:53:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:13.089 * Looking for test storage... 00:11:13.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:13.089 04:53:42 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:13.089 04:53:42 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:13.089 04:53:42 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:13.089 04:53:42 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:13.089 04:53:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:13.089 04:53:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:13.089 04:53:42 -- common/autotest_common.sh@10 -- # set +x 00:11:13.089 ************************************ 00:11:13.089 START TEST default_locks 00:11:13.089 ************************************ 00:11:13.089 04:53:42 -- common/autotest_common.sh@1104 -- # default_locks 00:11:13.089 04:53:42 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=118068 00:11:13.089 04:53:42 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:13.089 04:53:42 -- event/cpu_locks.sh@47 -- # waitforlisten 118068 00:11:13.089 04:53:42 -- common/autotest_common.sh@819 -- # '[' -z 118068 ']' 00:11:13.089 04:53:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.089 04:53:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:13.089 04:53:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.089 04:53:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:13.089 04:53:42 -- common/autotest_common.sh@10 -- # set +x 00:11:13.347 [2024-04-27 04:53:42.993033] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
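Editor's note: the teardown that closed app_repeat above (and recurs in every cpu_locks case below) goes through the killprocess helper seen in the trace: confirm the pid is still alive, look up its command name so a sudo wrapper is never signalled directly, then kill it and wait for it to be reaped. A reduced sketch of that pattern, following only the steps visible in the trace; the sudo branch here simply bails out, which is a simplification of the real helper:

  # Reduced sketch of the killprocess pattern from the trace above.
  killprocess_sketch() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 1             # must still be running
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
          [ "$process_name" = sudo ] && return 1            # sketch: bail out rather than signal a sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                        # reap it and propagate its exit status
  }

  # usage (pid must be a child of this shell): killprocess_sketch "$spdk_tgt_pid"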
00:11:13.347 [2024-04-27 04:53:42.993298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118068 ] 00:11:13.347 [2024-04-27 04:53:43.164608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.652 [2024-04-27 04:53:43.287989] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:13.652 [2024-04-27 04:53:43.288260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.219 04:53:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:14.219 04:53:43 -- common/autotest_common.sh@852 -- # return 0 00:11:14.219 04:53:43 -- event/cpu_locks.sh@49 -- # locks_exist 118068 00:11:14.219 04:53:43 -- event/cpu_locks.sh@22 -- # lslocks -p 118068 00:11:14.219 04:53:43 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:14.477 04:53:44 -- event/cpu_locks.sh@50 -- # killprocess 118068 00:11:14.477 04:53:44 -- common/autotest_common.sh@926 -- # '[' -z 118068 ']' 00:11:14.478 04:53:44 -- common/autotest_common.sh@930 -- # kill -0 118068 00:11:14.478 04:53:44 -- common/autotest_common.sh@931 -- # uname 00:11:14.478 04:53:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:14.478 04:53:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118068 00:11:14.478 04:53:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:14.478 04:53:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:14.478 killing process with pid 118068 00:11:14.478 04:53:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118068' 00:11:14.478 04:53:44 -- common/autotest_common.sh@945 -- # kill 118068 00:11:14.478 04:53:44 -- common/autotest_common.sh@950 -- # wait 118068 00:11:15.414 04:53:44 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 118068 00:11:15.414 04:53:44 -- common/autotest_common.sh@640 -- # local es=0 00:11:15.414 04:53:44 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 118068 00:11:15.414 04:53:44 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:11:15.414 04:53:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:15.414 04:53:44 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:11:15.414 04:53:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:15.414 04:53:44 -- common/autotest_common.sh@643 -- # waitforlisten 118068 00:11:15.414 04:53:44 -- common/autotest_common.sh@819 -- # '[' -z 118068 ']' 00:11:15.414 04:53:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.414 04:53:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:15.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.414 04:53:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:15.414 04:53:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:15.414 04:53:44 -- common/autotest_common.sh@10 -- # set +x 00:11:15.414 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (118068) - No such process 00:11:15.414 ERROR: process (pid: 118068) is no longer running 00:11:15.414 04:53:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:15.414 04:53:44 -- common/autotest_common.sh@852 -- # return 1 00:11:15.414 04:53:44 -- common/autotest_common.sh@643 -- # es=1 00:11:15.414 04:53:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:15.414 04:53:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:15.414 04:53:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:15.414 04:53:44 -- event/cpu_locks.sh@54 -- # no_locks 00:11:15.414 04:53:44 -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:15.414 04:53:44 -- event/cpu_locks.sh@26 -- # local lock_files 00:11:15.414 04:53:44 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:15.414 00:11:15.414 real 0m2.038s 00:11:15.414 user 0m1.906s 00:11:15.414 sys 0m0.761s 00:11:15.414 04:53:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.414 04:53:44 -- common/autotest_common.sh@10 -- # set +x 00:11:15.414 ************************************ 00:11:15.414 END TEST default_locks 00:11:15.414 ************************************ 00:11:15.414 04:53:44 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:15.414 04:53:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:15.414 04:53:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:15.414 04:53:44 -- common/autotest_common.sh@10 -- # set +x 00:11:15.414 ************************************ 00:11:15.414 START TEST default_locks_via_rpc 00:11:15.414 ************************************ 00:11:15.414 04:53:45 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:11:15.414 04:53:45 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=118131 00:11:15.414 04:53:45 -- event/cpu_locks.sh@63 -- # waitforlisten 118131 00:11:15.414 04:53:45 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:15.414 04:53:45 -- common/autotest_common.sh@819 -- # '[' -z 118131 ']' 00:11:15.414 04:53:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.414 04:53:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:15.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.414 04:53:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.414 04:53:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:15.414 04:53:45 -- common/autotest_common.sh@10 -- # set +x 00:11:15.414 [2024-04-27 04:53:45.087009] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
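Editor's note: stripped of the harness plumbing, the default_locks case that just finished asserts one thing: a target started with -m 0x1 holds a spdk_cpu_lock file lock that lslocks can see against its pid, and once the process is killed there is nothing left to wait for (hence the deliberate "No such process" error above). A minimal stand-alone sketch of that assertion, using the binary path from the trace; the fixed sleep is an assumption standing in for the waitforlisten polling:

  # Sketch: a target started with -m 0x1 must hold a CPU core lock (spdk_cpu_lock).
  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  $spdk_tgt -m 0x1 &
  pid=$!
  sleep 2                                                # assumption: crude wait for startup

  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "pid $pid holds its core lock"
  else
      echo "no spdk_cpu_lock entry found for pid $pid" >&2
  fi

  kill "$pid" && wait "$pid"
  kill -0 "$pid" 2>/dev/null || echo "pid $pid has exited, as the test expects"

The default_locks_via_rpc case starting here drives the same behaviour at runtime instead: framework_disable_cpumask_locks drops the core locks (the no_locks check) and framework_enable_cpumask_locks takes them again, verified with the same lslocks | grep spdk_cpu_lock probe.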
00:11:15.414 [2024-04-27 04:53:45.087246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118131 ] 00:11:15.414 [2024-04-27 04:53:45.259931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.673 [2024-04-27 04:53:45.360713] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:15.673 [2024-04-27 04:53:45.361069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.243 04:53:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:16.243 04:53:46 -- common/autotest_common.sh@852 -- # return 0 00:11:16.243 04:53:46 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:16.243 04:53:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.243 04:53:46 -- common/autotest_common.sh@10 -- # set +x 00:11:16.243 04:53:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.243 04:53:46 -- event/cpu_locks.sh@67 -- # no_locks 00:11:16.243 04:53:46 -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:16.243 04:53:46 -- event/cpu_locks.sh@26 -- # local lock_files 00:11:16.243 04:53:46 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:16.243 04:53:46 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:16.243 04:53:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.243 04:53:46 -- common/autotest_common.sh@10 -- # set +x 00:11:16.243 04:53:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.243 04:53:46 -- event/cpu_locks.sh@71 -- # locks_exist 118131 00:11:16.243 04:53:46 -- event/cpu_locks.sh@22 -- # lslocks -p 118131 00:11:16.243 04:53:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:16.502 04:53:46 -- event/cpu_locks.sh@73 -- # killprocess 118131 00:11:16.502 04:53:46 -- common/autotest_common.sh@926 -- # '[' -z 118131 ']' 00:11:16.502 04:53:46 -- common/autotest_common.sh@930 -- # kill -0 118131 00:11:16.502 04:53:46 -- common/autotest_common.sh@931 -- # uname 00:11:16.502 04:53:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:16.503 04:53:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118131 00:11:16.503 04:53:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:16.503 04:53:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:16.503 killing process with pid 118131 00:11:16.503 04:53:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118131' 00:11:16.503 04:53:46 -- common/autotest_common.sh@945 -- # kill 118131 00:11:16.503 04:53:46 -- common/autotest_common.sh@950 -- # wait 118131 00:11:17.440 00:11:17.440 real 0m2.081s 00:11:17.440 user 0m2.001s 00:11:17.440 sys 0m0.762s 00:11:17.440 04:53:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.440 04:53:47 -- common/autotest_common.sh@10 -- # set +x 00:11:17.440 ************************************ 00:11:17.440 END TEST default_locks_via_rpc 00:11:17.440 ************************************ 00:11:17.440 04:53:47 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:17.440 04:53:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:17.440 04:53:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:17.440 04:53:47 -- common/autotest_common.sh@10 -- # set +x 00:11:17.440 
************************************ 00:11:17.440 START TEST non_locking_app_on_locked_coremask 00:11:17.440 ************************************ 00:11:17.440 04:53:47 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:11:17.440 04:53:47 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=118191 00:11:17.440 04:53:47 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:17.440 04:53:47 -- event/cpu_locks.sh@81 -- # waitforlisten 118191 /var/tmp/spdk.sock 00:11:17.440 04:53:47 -- common/autotest_common.sh@819 -- # '[' -z 118191 ']' 00:11:17.440 04:53:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.440 04:53:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:17.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.440 04:53:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.440 04:53:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:17.440 04:53:47 -- common/autotest_common.sh@10 -- # set +x 00:11:17.440 [2024-04-27 04:53:47.219041] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:17.440 [2024-04-27 04:53:47.219274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118191 ] 00:11:17.700 [2024-04-27 04:53:47.380438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.700 [2024-04-27 04:53:47.497044] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:17.700 [2024-04-27 04:53:47.497347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.637 04:53:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:18.637 04:53:48 -- common/autotest_common.sh@852 -- # return 0 00:11:18.637 04:53:48 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=118212 00:11:18.637 04:53:48 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:18.637 04:53:48 -- event/cpu_locks.sh@85 -- # waitforlisten 118212 /var/tmp/spdk2.sock 00:11:18.637 04:53:48 -- common/autotest_common.sh@819 -- # '[' -z 118212 ']' 00:11:18.637 04:53:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:18.637 04:53:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:18.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:18.637 04:53:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:18.637 04:53:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:18.637 04:53:48 -- common/autotest_common.sh@10 -- # set +x 00:11:18.637 [2024-04-27 04:53:48.330554] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:18.637 [2024-04-27 04:53:48.330822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118212 ] 00:11:18.637 [2024-04-27 04:53:48.514691] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
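Editor's note: the "CPU core locks deactivated." notice just above is the point of non_locking_app_on_locked_coremask: the first target owns the core-0 lock, so a second target can only share the same -m 0x1 mask if it is started with --disable-cpumask-locks and given its own RPC socket. A reduced two-instance sketch of that setup, with fixed sleeps standing in for the waitforlisten polling in the trace:

  # Sketch: two targets on the same core mask; the second skips the CPU core locks.
  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  $spdk_tgt -m 0x1 &                                     # first instance takes the core-0 lock
  pid1=$!
  sleep 2                                                # assumption: crude startup wait

  $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # prints "CPU core locks deactivated."
  pid2=$!
  sleep 2

  lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "first instance holds the lock"
  lslocks -p "$pid2" | grep -q spdk_cpu_lock || echo "second instance took no lock"

  kill "$pid1" "$pid2"
  wait

The following case, locking_app_on_unlocked_coremask, flips the order: the first target is the one started with --disable-cpumask-locks, and the second, lock-taking target still comes up cleanly on its own socket.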
00:11:18.637 [2024-04-27 04:53:48.514777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.896 [2024-04-27 04:53:48.746504] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:18.896 [2024-04-27 04:53:48.746785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.274 04:53:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:20.274 04:53:50 -- common/autotest_common.sh@852 -- # return 0 00:11:20.274 04:53:50 -- event/cpu_locks.sh@87 -- # locks_exist 118191 00:11:20.274 04:53:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:20.274 04:53:50 -- event/cpu_locks.sh@22 -- # lslocks -p 118191 00:11:20.841 04:53:50 -- event/cpu_locks.sh@89 -- # killprocess 118191 00:11:20.841 04:53:50 -- common/autotest_common.sh@926 -- # '[' -z 118191 ']' 00:11:20.841 04:53:50 -- common/autotest_common.sh@930 -- # kill -0 118191 00:11:20.841 04:53:50 -- common/autotest_common.sh@931 -- # uname 00:11:20.841 04:53:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:20.841 04:53:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118191 00:11:20.841 04:53:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:20.841 04:53:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:20.841 killing process with pid 118191 00:11:20.841 04:53:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118191' 00:11:20.841 04:53:50 -- common/autotest_common.sh@945 -- # kill 118191 00:11:20.841 04:53:50 -- common/autotest_common.sh@950 -- # wait 118191 00:11:22.254 04:53:51 -- event/cpu_locks.sh@90 -- # killprocess 118212 00:11:22.254 04:53:51 -- common/autotest_common.sh@926 -- # '[' -z 118212 ']' 00:11:22.254 04:53:51 -- common/autotest_common.sh@930 -- # kill -0 118212 00:11:22.254 04:53:51 -- common/autotest_common.sh@931 -- # uname 00:11:22.254 04:53:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:22.254 04:53:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118212 00:11:22.254 04:53:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:22.254 04:53:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:22.254 killing process with pid 118212 00:11:22.254 04:53:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118212' 00:11:22.254 04:53:51 -- common/autotest_common.sh@945 -- # kill 118212 00:11:22.254 04:53:51 -- common/autotest_common.sh@950 -- # wait 118212 00:11:22.852 00:11:22.852 real 0m5.515s 00:11:22.852 user 0m5.871s 00:11:22.852 sys 0m1.532s 00:11:22.852 04:53:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.852 04:53:52 -- common/autotest_common.sh@10 -- # set +x 00:11:22.852 ************************************ 00:11:22.852 END TEST non_locking_app_on_locked_coremask 00:11:22.852 ************************************ 00:11:22.852 04:53:52 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:22.852 04:53:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:22.852 04:53:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:22.852 04:53:52 -- common/autotest_common.sh@10 -- # set +x 00:11:22.852 ************************************ 00:11:22.852 START TEST locking_app_on_unlocked_coremask 00:11:22.852 ************************************ 00:11:22.852 04:53:52 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:11:22.852 
04:53:52 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=118307 00:11:22.852 04:53:52 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:22.852 04:53:52 -- event/cpu_locks.sh@99 -- # waitforlisten 118307 /var/tmp/spdk.sock 00:11:22.852 04:53:52 -- common/autotest_common.sh@819 -- # '[' -z 118307 ']' 00:11:22.852 04:53:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.852 04:53:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:22.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.852 04:53:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.852 04:53:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:22.852 04:53:52 -- common/autotest_common.sh@10 -- # set +x 00:11:23.135 [2024-04-27 04:53:52.788252] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:23.135 [2024-04-27 04:53:52.788507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118307 ] 00:11:23.135 [2024-04-27 04:53:52.945773] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:23.135 [2024-04-27 04:53:52.945855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.392 [2024-04-27 04:53:53.059222] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:23.392 [2024-04-27 04:53:53.059506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.958 04:53:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:23.958 04:53:53 -- common/autotest_common.sh@852 -- # return 0 00:11:23.958 04:53:53 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=118331 00:11:23.958 04:53:53 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:23.958 04:53:53 -- event/cpu_locks.sh@103 -- # waitforlisten 118331 /var/tmp/spdk2.sock 00:11:23.958 04:53:53 -- common/autotest_common.sh@819 -- # '[' -z 118331 ']' 00:11:23.958 04:53:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:23.958 04:53:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:23.958 04:53:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:23.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:23.958 04:53:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:23.958 04:53:53 -- common/autotest_common.sh@10 -- # set +x 00:11:23.958 [2024-04-27 04:53:53.778620] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:11:23.958 [2024-04-27 04:53:53.778901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118331 ] 00:11:24.217 [2024-04-27 04:53:53.942081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.475 [2024-04-27 04:53:54.164607] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:24.475 [2024-04-27 04:53:54.164864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.854 04:53:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:25.854 04:53:55 -- common/autotest_common.sh@852 -- # return 0 00:11:25.854 04:53:55 -- event/cpu_locks.sh@105 -- # locks_exist 118331 00:11:25.854 04:53:55 -- event/cpu_locks.sh@22 -- # lslocks -p 118331 00:11:25.854 04:53:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:26.113 04:53:55 -- event/cpu_locks.sh@107 -- # killprocess 118307 00:11:26.113 04:53:55 -- common/autotest_common.sh@926 -- # '[' -z 118307 ']' 00:11:26.113 04:53:55 -- common/autotest_common.sh@930 -- # kill -0 118307 00:11:26.113 04:53:55 -- common/autotest_common.sh@931 -- # uname 00:11:26.113 04:53:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:26.113 04:53:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118307 00:11:26.113 04:53:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:26.113 04:53:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:26.113 killing process with pid 118307 00:11:26.113 04:53:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118307' 00:11:26.113 04:53:55 -- common/autotest_common.sh@945 -- # kill 118307 00:11:26.113 04:53:55 -- common/autotest_common.sh@950 -- # wait 118307 00:11:28.018 04:53:57 -- event/cpu_locks.sh@108 -- # killprocess 118331 00:11:28.018 04:53:57 -- common/autotest_common.sh@926 -- # '[' -z 118331 ']' 00:11:28.018 04:53:57 -- common/autotest_common.sh@930 -- # kill -0 118331 00:11:28.018 04:53:57 -- common/autotest_common.sh@931 -- # uname 00:11:28.018 04:53:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:28.018 04:53:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118331 00:11:28.018 04:53:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:28.018 04:53:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:28.018 killing process with pid 118331 00:11:28.018 04:53:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118331' 00:11:28.018 04:53:57 -- common/autotest_common.sh@945 -- # kill 118331 00:11:28.018 04:53:57 -- common/autotest_common.sh@950 -- # wait 118331 00:11:28.585 00:11:28.585 real 0m5.453s 00:11:28.585 user 0m5.641s 00:11:28.585 sys 0m1.527s 00:11:28.585 04:53:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.585 04:53:58 -- common/autotest_common.sh@10 -- # set +x 00:11:28.585 ************************************ 00:11:28.585 END TEST locking_app_on_unlocked_coremask 00:11:28.585 ************************************ 00:11:28.585 04:53:58 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:28.585 04:53:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:28.585 04:53:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:28.585 04:53:58 -- 
common/autotest_common.sh@10 -- # set +x 00:11:28.585 ************************************ 00:11:28.585 START TEST locking_app_on_locked_coremask 00:11:28.585 ************************************ 00:11:28.585 04:53:58 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:11:28.585 04:53:58 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=118419 00:11:28.585 04:53:58 -- event/cpu_locks.sh@116 -- # waitforlisten 118419 /var/tmp/spdk.sock 00:11:28.585 04:53:58 -- common/autotest_common.sh@819 -- # '[' -z 118419 ']' 00:11:28.585 04:53:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.585 04:53:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:28.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.585 04:53:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.585 04:53:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:28.585 04:53:58 -- common/autotest_common.sh@10 -- # set +x 00:11:28.585 04:53:58 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:28.585 [2024-04-27 04:53:58.305976] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:28.585 [2024-04-27 04:53:58.306243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118419 ] 00:11:28.843 [2024-04-27 04:53:58.480020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.843 [2024-04-27 04:53:58.608743] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:28.843 [2024-04-27 04:53:58.609278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.409 04:53:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:29.409 04:53:59 -- common/autotest_common.sh@852 -- # return 0 00:11:29.409 04:53:59 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=118440 00:11:29.409 04:53:59 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:29.409 04:53:59 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 118440 /var/tmp/spdk2.sock 00:11:29.409 04:53:59 -- common/autotest_common.sh@640 -- # local es=0 00:11:29.409 04:53:59 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 118440 /var/tmp/spdk2.sock 00:11:29.409 04:53:59 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:11:29.409 04:53:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:29.409 04:53:59 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:11:29.409 04:53:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:29.409 04:53:59 -- common/autotest_common.sh@643 -- # waitforlisten 118440 /var/tmp/spdk2.sock 00:11:29.409 04:53:59 -- common/autotest_common.sh@819 -- # '[' -z 118440 ']' 00:11:29.409 04:53:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:29.409 04:53:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:29.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
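The lock checks in the runs above boil down to one advisory file lock per claimed core: spdk_tgt creates /var/tmp/spdk_cpu_lock_NNN for each core in its mask, and the suite's locks_exist helper verifies the claim with lslocks. A minimal manual sketch of the same check, assuming a single target started with -m 0x1; the binary path and lock names are taken from this log, while the pidof/sleep plumbing is illustrative only:

  # start a target that claims core 0, which takes a lock on /var/tmp/spdk_cpu_lock_000
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  tgt_pid=$!
  sleep 2                                   # give the app time to claim its core
  # the claim shows up as an advisory lock held by that pid
  lslocks -p "$tgt_pid" | grep spdk_cpu_lock && echo "core lock held"
  kill "$tgt_pid"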
00:11:29.409 04:53:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:29.409 04:53:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:29.409 04:53:59 -- common/autotest_common.sh@10 -- # set +x 00:11:29.667 [2024-04-27 04:53:59.353012] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:29.667 [2024-04-27 04:53:59.355474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118440 ] 00:11:29.667 [2024-04-27 04:53:59.528687] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 118419 has claimed it. 00:11:29.667 [2024-04-27 04:53:59.528857] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:30.235 ERROR: process (pid: 118440) is no longer running 00:11:30.235 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (118440) - No such process 00:11:30.235 04:54:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:30.235 04:54:00 -- common/autotest_common.sh@852 -- # return 1 00:11:30.235 04:54:00 -- common/autotest_common.sh@643 -- # es=1 00:11:30.235 04:54:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:30.235 04:54:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:30.235 04:54:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:30.235 04:54:00 -- event/cpu_locks.sh@122 -- # locks_exist 118419 00:11:30.235 04:54:00 -- event/cpu_locks.sh@22 -- # lslocks -p 118419 00:11:30.235 04:54:00 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:30.493 04:54:00 -- event/cpu_locks.sh@124 -- # killprocess 118419 00:11:30.493 04:54:00 -- common/autotest_common.sh@926 -- # '[' -z 118419 ']' 00:11:30.493 04:54:00 -- common/autotest_common.sh@930 -- # kill -0 118419 00:11:30.493 04:54:00 -- common/autotest_common.sh@931 -- # uname 00:11:30.493 04:54:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:30.493 04:54:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118419 00:11:30.493 04:54:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:30.493 killing process with pid 118419 00:11:30.493 04:54:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:30.493 04:54:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118419' 00:11:30.493 04:54:00 -- common/autotest_common.sh@945 -- # kill 118419 00:11:30.493 04:54:00 -- common/autotest_common.sh@950 -- # wait 118419 00:11:31.429 ************************************ 00:11:31.429 END TEST locking_app_on_locked_coremask 00:11:31.429 ************************************ 00:11:31.429 00:11:31.429 real 0m2.857s 00:11:31.429 user 0m3.035s 00:11:31.429 sys 0m0.902s 00:11:31.430 04:54:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:31.430 04:54:01 -- common/autotest_common.sh@10 -- # set +x 00:11:31.430 04:54:01 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:31.430 04:54:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:31.430 04:54:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:31.430 04:54:01 -- common/autotest_common.sh@10 -- # set +x 00:11:31.430 ************************************ 00:11:31.430 START TEST 
locking_overlapped_coremask 00:11:31.430 ************************************ 00:11:31.430 04:54:01 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:11:31.430 04:54:01 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=118497 00:11:31.430 04:54:01 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:31.430 04:54:01 -- event/cpu_locks.sh@133 -- # waitforlisten 118497 /var/tmp/spdk.sock 00:11:31.430 04:54:01 -- common/autotest_common.sh@819 -- # '[' -z 118497 ']' 00:11:31.430 04:54:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.430 04:54:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:31.430 04:54:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.430 04:54:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:31.430 04:54:01 -- common/autotest_common.sh@10 -- # set +x 00:11:31.430 [2024-04-27 04:54:01.218892] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:31.430 [2024-04-27 04:54:01.219945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118497 ] 00:11:31.687 [2024-04-27 04:54:01.400352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:31.687 [2024-04-27 04:54:01.523582] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:31.687 [2024-04-27 04:54:01.524236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.687 [2024-04-27 04:54:01.524376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.687 [2024-04-27 04:54:01.524382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.621 04:54:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:32.621 04:54:02 -- common/autotest_common.sh@852 -- # return 0 00:11:32.621 04:54:02 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=118520 00:11:32.621 04:54:02 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 118520 /var/tmp/spdk2.sock 00:11:32.621 04:54:02 -- common/autotest_common.sh@640 -- # local es=0 00:11:32.621 04:54:02 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 118520 /var/tmp/spdk2.sock 00:11:32.621 04:54:02 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:11:32.622 04:54:02 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:32.622 04:54:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:32.622 04:54:02 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:11:32.622 04:54:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:32.622 04:54:02 -- common/autotest_common.sh@643 -- # waitforlisten 118520 /var/tmp/spdk2.sock 00:11:32.622 04:54:02 -- common/autotest_common.sh@819 -- # '[' -z 118520 ']' 00:11:32.622 04:54:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:32.622 04:54:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:32.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:11:32.622 04:54:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:32.622 04:54:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:32.622 04:54:02 -- common/autotest_common.sh@10 -- # set +x 00:11:32.622 [2024-04-27 04:54:02.283924] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:32.622 [2024-04-27 04:54:02.284674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118520 ] 00:11:32.622 [2024-04-27 04:54:02.492003] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 118497 has claimed it. 00:11:32.622 [2024-04-27 04:54:02.492130] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:33.188 ERROR: process (pid: 118520) is no longer running 00:11:33.188 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (118520) - No such process 00:11:33.188 04:54:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:33.188 04:54:02 -- common/autotest_common.sh@852 -- # return 1 00:11:33.188 04:54:02 -- common/autotest_common.sh@643 -- # es=1 00:11:33.188 04:54:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:33.188 04:54:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:33.188 04:54:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:33.188 04:54:02 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:33.188 04:54:02 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:33.189 04:54:02 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:33.189 04:54:02 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:33.189 04:54:02 -- event/cpu_locks.sh@141 -- # killprocess 118497 00:11:33.189 04:54:02 -- common/autotest_common.sh@926 -- # '[' -z 118497 ']' 00:11:33.189 04:54:02 -- common/autotest_common.sh@930 -- # kill -0 118497 00:11:33.189 04:54:02 -- common/autotest_common.sh@931 -- # uname 00:11:33.189 04:54:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:33.189 04:54:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118497 00:11:33.189 04:54:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:33.189 killing process with pid 118497 00:11:33.189 04:54:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:33.189 04:54:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118497' 00:11:33.189 04:54:02 -- common/autotest_common.sh@945 -- # kill 118497 00:11:33.189 04:54:02 -- common/autotest_common.sh@950 -- # wait 118497 00:11:34.147 ************************************ 00:11:34.147 END TEST locking_overlapped_coremask 00:11:34.147 ************************************ 00:11:34.147 00:11:34.147 real 0m2.566s 00:11:34.147 user 0m6.686s 00:11:34.147 sys 0m0.758s 00:11:34.147 04:54:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.147 04:54:03 -- common/autotest_common.sh@10 -- # set +x 00:11:34.147 04:54:03 -- event/cpu_locks.sh@172 -- # run_test 
locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:34.147 04:54:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:34.147 04:54:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:34.147 04:54:03 -- common/autotest_common.sh@10 -- # set +x 00:11:34.148 ************************************ 00:11:34.148 START TEST locking_overlapped_coremask_via_rpc 00:11:34.148 ************************************ 00:11:34.148 04:54:03 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:11:34.148 04:54:03 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=118574 00:11:34.148 04:54:03 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:34.148 04:54:03 -- event/cpu_locks.sh@149 -- # waitforlisten 118574 /var/tmp/spdk.sock 00:11:34.148 04:54:03 -- common/autotest_common.sh@819 -- # '[' -z 118574 ']' 00:11:34.148 04:54:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.148 04:54:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:34.148 04:54:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.148 04:54:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:34.148 04:54:03 -- common/autotest_common.sh@10 -- # set +x 00:11:34.148 [2024-04-27 04:54:03.846430] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:34.148 [2024-04-27 04:54:03.846992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118574 ] 00:11:34.148 [2024-04-27 04:54:04.028616] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:34.148 [2024-04-27 04:54:04.028897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:34.421 [2024-04-27 04:54:04.119012] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:34.421 [2024-04-27 04:54:04.119682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.421 [2024-04-27 04:54:04.119782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.421 [2024-04-27 04:54:04.119793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:34.989 04:54:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:34.989 04:54:04 -- common/autotest_common.sh@852 -- # return 0 00:11:34.989 04:54:04 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=118597 00:11:34.989 04:54:04 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:34.989 04:54:04 -- event/cpu_locks.sh@153 -- # waitforlisten 118597 /var/tmp/spdk2.sock 00:11:34.989 04:54:04 -- common/autotest_common.sh@819 -- # '[' -z 118597 ']' 00:11:34.989 04:54:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:34.989 04:54:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:34.989 04:54:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:11:34.989 04:54:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:34.989 04:54:04 -- common/autotest_common.sh@10 -- # set +x 00:11:34.989 [2024-04-27 04:54:04.838978] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:34.989 [2024-04-27 04:54:04.839408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118597 ] 00:11:35.247 [2024-04-27 04:54:05.033975] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:35.247 [2024-04-27 04:54:05.034069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:35.506 [2024-04-27 04:54:05.224608] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:35.506 [2024-04-27 04:54:05.232992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.506 [2024-04-27 04:54:05.244772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.506 [2024-04-27 04:54:05.244776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:36.880 04:54:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:36.880 04:54:06 -- common/autotest_common.sh@852 -- # return 0 00:11:36.880 04:54:06 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:36.880 04:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:36.880 04:54:06 -- common/autotest_common.sh@10 -- # set +x 00:11:36.880 04:54:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:36.880 04:54:06 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:36.880 04:54:06 -- common/autotest_common.sh@640 -- # local es=0 00:11:36.880 04:54:06 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:36.881 04:54:06 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:11:36.881 04:54:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:36.881 04:54:06 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:11:36.881 04:54:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:36.881 04:54:06 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:36.881 04:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:36.881 04:54:06 -- common/autotest_common.sh@10 -- # set +x 00:11:36.881 [2024-04-27 04:54:06.544738] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 118574 has claimed it. 
00:11:36.881 request: 00:11:36.881 { 00:11:36.881 "method": "framework_enable_cpumask_locks", 00:11:36.881 "req_id": 1 00:11:36.881 } 00:11:36.881 Got JSON-RPC error response 00:11:36.881 response: 00:11:36.881 { 00:11:36.881 "code": -32603, 00:11:36.881 "message": "Failed to claim CPU core: 2" 00:11:36.881 } 00:11:36.881 04:54:06 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:11:36.881 04:54:06 -- common/autotest_common.sh@643 -- # es=1 00:11:36.881 04:54:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:36.881 04:54:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:36.881 04:54:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:36.881 04:54:06 -- event/cpu_locks.sh@158 -- # waitforlisten 118574 /var/tmp/spdk.sock 00:11:36.881 04:54:06 -- common/autotest_common.sh@819 -- # '[' -z 118574 ']' 00:11:36.881 04:54:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.881 04:54:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:36.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.881 04:54:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.881 04:54:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:36.881 04:54:06 -- common/autotest_common.sh@10 -- # set +x 00:11:37.139 04:54:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:37.139 04:54:06 -- common/autotest_common.sh@852 -- # return 0 00:11:37.139 04:54:06 -- event/cpu_locks.sh@159 -- # waitforlisten 118597 /var/tmp/spdk2.sock 00:11:37.139 04:54:06 -- common/autotest_common.sh@819 -- # '[' -z 118597 ']' 00:11:37.139 04:54:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:37.139 04:54:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:37.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:37.139 04:54:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
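The -32603 response above is the expected outcome of this scenario: both targets were started with --disable-cpumask-locks on overlapping masks (0x7 and 0x1c both contain core 2), locking is then switched back on over JSON-RPC for the first instance, and the same call against the second instance has to fail because core 2 is already claimed. A condensed sketch of that flow, assuming the suite's rpc_cmd helper corresponds to SPDK's scripts/rpc.py (an assumption; only rpc_cmd appears in this log):

  # two targets, overlapping core masks, core locking disabled at startup
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  # re-enable locking on the first instance: it claims cores 0-2
  scripts/rpc.py framework_enable_cpumask_locks
  # the second instance can no longer claim core 2; this call is expected to
  # fail with -32603 "Failed to claim CPU core: 2"
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks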
00:11:37.139 04:54:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:37.139 04:54:06 -- common/autotest_common.sh@10 -- # set +x 00:11:37.397 04:54:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:37.397 04:54:07 -- common/autotest_common.sh@852 -- # return 0 00:11:37.397 04:54:07 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:37.397 04:54:07 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:37.397 04:54:07 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:37.397 04:54:07 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:37.397 00:11:37.397 real 0m3.310s 00:11:37.397 user 0m1.533s 00:11:37.397 sys 0m0.214s 00:11:37.397 04:54:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.397 ************************************ 00:11:37.397 END TEST locking_overlapped_coremask_via_rpc 00:11:37.397 ************************************ 00:11:37.397 04:54:07 -- common/autotest_common.sh@10 -- # set +x 00:11:37.397 04:54:07 -- event/cpu_locks.sh@174 -- # cleanup 00:11:37.397 04:54:07 -- event/cpu_locks.sh@15 -- # [[ -z 118574 ]] 00:11:37.397 04:54:07 -- event/cpu_locks.sh@15 -- # killprocess 118574 00:11:37.397 04:54:07 -- common/autotest_common.sh@926 -- # '[' -z 118574 ']' 00:11:37.397 04:54:07 -- common/autotest_common.sh@930 -- # kill -0 118574 00:11:37.397 04:54:07 -- common/autotest_common.sh@931 -- # uname 00:11:37.397 04:54:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:37.397 04:54:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118574 00:11:37.397 04:54:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:37.397 killing process with pid 118574 00:11:37.397 04:54:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:37.397 04:54:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118574' 00:11:37.397 04:54:07 -- common/autotest_common.sh@945 -- # kill 118574 00:11:37.397 04:54:07 -- common/autotest_common.sh@950 -- # wait 118574 00:11:38.331 04:54:08 -- event/cpu_locks.sh@16 -- # [[ -z 118597 ]] 00:11:38.331 04:54:08 -- event/cpu_locks.sh@16 -- # killprocess 118597 00:11:38.331 04:54:08 -- common/autotest_common.sh@926 -- # '[' -z 118597 ']' 00:11:38.331 04:54:08 -- common/autotest_common.sh@930 -- # kill -0 118597 00:11:38.331 04:54:08 -- common/autotest_common.sh@931 -- # uname 00:11:38.331 04:54:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:38.331 04:54:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118597 00:11:38.331 04:54:08 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:11:38.331 04:54:08 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:11:38.331 killing process with pid 118597 00:11:38.331 04:54:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118597' 00:11:38.331 04:54:08 -- common/autotest_common.sh@945 -- # kill 118597 00:11:38.331 04:54:08 -- common/autotest_common.sh@950 -- # wait 118597 00:11:39.292 04:54:09 -- event/cpu_locks.sh@18 -- # rm -f 00:11:39.292 Process with pid 118574 is not found 00:11:39.292 Process with pid 118597 is not found 00:11:39.292 04:54:09 -- event/cpu_locks.sh@1 -- # cleanup 00:11:39.292 04:54:09 -- event/cpu_locks.sh@15 -- # [[ -z 
118574 ]] 00:11:39.292 04:54:09 -- event/cpu_locks.sh@15 -- # killprocess 118574 00:11:39.292 04:54:09 -- common/autotest_common.sh@926 -- # '[' -z 118574 ']' 00:11:39.292 04:54:09 -- common/autotest_common.sh@930 -- # kill -0 118574 00:11:39.292 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (118574) - No such process 00:11:39.292 04:54:09 -- common/autotest_common.sh@953 -- # echo 'Process with pid 118574 is not found' 00:11:39.292 04:54:09 -- event/cpu_locks.sh@16 -- # [[ -z 118597 ]] 00:11:39.292 04:54:09 -- event/cpu_locks.sh@16 -- # killprocess 118597 00:11:39.292 04:54:09 -- common/autotest_common.sh@926 -- # '[' -z 118597 ']' 00:11:39.292 04:54:09 -- common/autotest_common.sh@930 -- # kill -0 118597 00:11:39.293 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (118597) - No such process 00:11:39.293 04:54:09 -- common/autotest_common.sh@953 -- # echo 'Process with pid 118597 is not found' 00:11:39.293 04:54:09 -- event/cpu_locks.sh@18 -- # rm -f 00:11:39.293 ************************************ 00:11:39.293 END TEST cpu_locks 00:11:39.293 ************************************ 00:11:39.293 00:11:39.293 real 0m26.331s 00:11:39.293 user 0m46.291s 00:11:39.293 sys 0m7.712s 00:11:39.293 04:54:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.293 04:54:09 -- common/autotest_common.sh@10 -- # set +x 00:11:39.560 ************************************ 00:11:39.560 END TEST event 00:11:39.560 ************************************ 00:11:39.560 00:11:39.560 real 0m56.245s 00:11:39.560 user 1m47.132s 00:11:39.560 sys 0m12.224s 00:11:39.560 04:54:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.560 04:54:09 -- common/autotest_common.sh@10 -- # set +x 00:11:39.560 04:54:09 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:39.560 04:54:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:39.560 04:54:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:39.560 04:54:09 -- common/autotest_common.sh@10 -- # set +x 00:11:39.560 ************************************ 00:11:39.560 START TEST thread 00:11:39.560 ************************************ 00:11:39.560 04:54:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:39.560 * Looking for test storage... 00:11:39.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:39.560 04:54:09 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:39.560 04:54:09 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:11:39.560 04:54:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:39.560 04:54:09 -- common/autotest_common.sh@10 -- # set +x 00:11:39.560 ************************************ 00:11:39.560 START TEST thread_poller_perf 00:11:39.560 ************************************ 00:11:39.560 04:54:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:39.560 [2024-04-27 04:54:09.360385] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:11:39.560 [2024-04-27 04:54:09.360677] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118754 ] 00:11:39.819 [2024-04-27 04:54:09.538562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.819 [2024-04-27 04:54:09.692781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.819 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:11:41.194 ====================================== 00:11:41.194 busy:2219978916 (cyc) 00:11:41.194 total_run_count: 305000 00:11:41.194 tsc_hz: 2200000000 (cyc) 00:11:41.194 ====================================== 00:11:41.194 poller_cost: 7278 (cyc), 3308 (nsec) 00:11:41.194 00:11:41.194 real 0m1.534s 00:11:41.194 user 0m1.297s 00:11:41.194 sys 0m0.136s 00:11:41.194 04:54:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.194 04:54:10 -- common/autotest_common.sh@10 -- # set +x 00:11:41.194 ************************************ 00:11:41.194 END TEST thread_poller_perf 00:11:41.194 ************************************ 00:11:41.194 04:54:10 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:41.194 04:54:10 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:11:41.194 04:54:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:41.194 04:54:10 -- common/autotest_common.sh@10 -- # set +x 00:11:41.194 ************************************ 00:11:41.194 START TEST thread_poller_perf 00:11:41.194 ************************************ 00:11:41.194 04:54:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:41.194 [2024-04-27 04:54:10.941096] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:41.194 [2024-04-27 04:54:10.941363] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118800 ] 00:11:41.452 [2024-04-27 04:54:11.114206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.452 [2024-04-27 04:54:11.246298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.452 Running 1000 pollers for 1 seconds with 0 microseconds period. 
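The summary block of the first poller_perf run above can be cross-checked by hand: poller_cost is the busy cycle count divided by total_run_count, converted to nanoseconds with the reported tsc_hz. The arithmetic below reproduces the printed figures from the raw counters; the derivation is a reader's check, not something the tool prints:

  # poller_cost = busy_cycles / run_count; nsec = cycles / (tsc_hz / 1e9)
  awk 'BEGIN { busy=2219978916; runs=305000; hz=2200000000;
               cyc = busy / runs;
               printf "%d cyc, %d nsec\n", int(cyc), int(cyc * 1e9 / hz) }'
  # prints: 7278 cyc, 3308 nsec, matching the summary above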
00:11:42.828 ====================================== 00:11:42.828 busy:2205801261 (cyc) 00:11:42.828 total_run_count: 3961000 00:11:42.828 tsc_hz: 2200000000 (cyc) 00:11:42.828 ====================================== 00:11:42.828 poller_cost: 556 (cyc), 252 (nsec) 00:11:42.828 00:11:42.828 real 0m1.511s 00:11:42.828 user 0m1.262s 00:11:42.828 sys 0m0.148s 00:11:42.828 04:54:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:42.828 04:54:12 -- common/autotest_common.sh@10 -- # set +x 00:11:42.828 ************************************ 00:11:42.828 END TEST thread_poller_perf 00:11:42.828 ************************************ 00:11:42.828 04:54:12 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:11:42.828 04:54:12 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:42.828 04:54:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:42.828 04:54:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:42.828 04:54:12 -- common/autotest_common.sh@10 -- # set +x 00:11:42.828 ************************************ 00:11:42.828 START TEST thread_spdk_lock 00:11:42.828 ************************************ 00:11:42.828 04:54:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:42.828 [2024-04-27 04:54:12.508612] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:42.828 [2024-04-27 04:54:12.508870] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118836 ] 00:11:42.828 [2024-04-27 04:54:12.683267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:43.098 [2024-04-27 04:54:12.804860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.098 [2024-04-27 04:54:12.804866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.668 [2024-04-27 04:54:13.332150] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:43.668 [2024-04-27 04:54:13.332316] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:11:43.668 [2024-04-27 04:54:13.332381] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x55771df12140 00:11:43.668 [2024-04-27 04:54:13.334075] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:43.668 [2024-04-27 04:54:13.334187] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:43.668 [2024-04-27 04:54:13.334266] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:43.668 Starting test contend 00:11:43.668 Worker Delay Wait us Hold us Total us 00:11:43.668 0 3 123128 197097 320226 00:11:43.668 1 5 59330 298725 358056 00:11:43.668 PASS test contend 00:11:43.668 Starting test hold_by_poller 
00:11:43.668 PASS test hold_by_poller 00:11:43.668 Starting test hold_by_message 00:11:43.668 PASS test hold_by_message 00:11:43.668 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:11:43.668 100014 assertions passed 00:11:43.668 0 assertions failed 00:11:43.668 00:11:43.668 real 0m1.016s 00:11:43.668 user 0m1.302s 00:11:43.668 sys 0m0.145s 00:11:43.668 04:54:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.668 04:54:13 -- common/autotest_common.sh@10 -- # set +x 00:11:43.668 ************************************ 00:11:43.668 END TEST thread_spdk_lock 00:11:43.668 ************************************ 00:11:43.668 00:11:43.668 real 0m4.285s 00:11:43.668 user 0m3.987s 00:11:43.668 sys 0m0.529s 00:11:43.668 04:54:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.668 ************************************ 00:11:43.668 END TEST thread 00:11:43.668 ************************************ 00:11:43.668 04:54:13 -- common/autotest_common.sh@10 -- # set +x 00:11:43.927 04:54:13 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:11:43.927 04:54:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:43.927 04:54:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:43.927 04:54:13 -- common/autotest_common.sh@10 -- # set +x 00:11:43.927 ************************************ 00:11:43.927 START TEST accel 00:11:43.927 ************************************ 00:11:43.927 04:54:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:11:43.927 * Looking for test storage... 00:11:43.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:43.927 04:54:13 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:11:43.927 04:54:13 -- accel/accel.sh@74 -- # get_expected_opcs 00:11:43.927 04:54:13 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:43.927 04:54:13 -- accel/accel.sh@59 -- # spdk_tgt_pid=118914 00:11:43.927 04:54:13 -- accel/accel.sh@60 -- # waitforlisten 118914 00:11:43.927 04:54:13 -- common/autotest_common.sh@819 -- # '[' -z 118914 ']' 00:11:43.927 04:54:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.927 04:54:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:43.927 04:54:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.927 04:54:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:43.927 04:54:13 -- common/autotest_common.sh@10 -- # set +x 00:11:43.927 04:54:13 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:11:43.927 04:54:13 -- accel/accel.sh@58 -- # build_accel_config 00:11:43.927 04:54:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:43.927 04:54:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:43.927 04:54:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:43.927 04:54:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:43.927 04:54:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:43.927 04:54:13 -- accel/accel.sh@41 -- # local IFS=, 00:11:43.927 04:54:13 -- accel/accel.sh@42 -- # jq -r . 00:11:43.927 [2024-04-27 04:54:13.736719] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:11:43.927 [2024-04-27 04:54:13.736979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118914 ] 00:11:44.186 [2024-04-27 04:54:13.909274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.186 [2024-04-27 04:54:14.046114] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:44.186 [2024-04-27 04:54:14.046409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.123 04:54:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:45.123 04:54:14 -- common/autotest_common.sh@852 -- # return 0 00:11:45.123 04:54:14 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:11:45.123 04:54:14 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:11:45.123 04:54:14 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:11:45.123 04:54:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:45.123 04:54:14 -- common/autotest_common.sh@10 -- # set +x 00:11:45.123 04:54:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:45.123 04:54:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # IFS== 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # read -r opc module 00:11:45.123 04:54:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:45.123 04:54:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # IFS== 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # read -r opc module 00:11:45.123 04:54:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:45.123 04:54:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # IFS== 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # read -r opc module 00:11:45.123 04:54:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:45.123 04:54:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # IFS== 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # read -r opc module 00:11:45.123 04:54:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:45.123 04:54:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # IFS== 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # read -r opc module 00:11:45.123 04:54:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:45.123 04:54:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # IFS== 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # read -r opc module 00:11:45.123 04:54:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:45.123 04:54:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # IFS== 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # read -r opc module 00:11:45.123 04:54:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:45.123 04:54:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # IFS== 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # read -r opc module 00:11:45.123 04:54:14 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:11:45.123 04:54:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # IFS== 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # read -r opc module 00:11:45.123 04:54:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:45.123 04:54:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # IFS== 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # read -r opc module 00:11:45.123 04:54:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:45.123 04:54:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # IFS== 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # read -r opc module 00:11:45.123 04:54:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:45.123 04:54:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # IFS== 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # read -r opc module 00:11:45.123 04:54:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:45.123 04:54:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # IFS== 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # read -r opc module 00:11:45.123 04:54:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:45.123 04:54:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # IFS== 00:11:45.123 04:54:14 -- accel/accel.sh@64 -- # read -r opc module 00:11:45.123 04:54:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:11:45.123 04:54:14 -- accel/accel.sh@67 -- # killprocess 118914 00:11:45.123 04:54:14 -- common/autotest_common.sh@926 -- # '[' -z 118914 ']' 00:11:45.123 04:54:14 -- common/autotest_common.sh@930 -- # kill -0 118914 00:11:45.123 04:54:14 -- common/autotest_common.sh@931 -- # uname 00:11:45.123 04:54:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:45.123 04:54:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118914 00:11:45.123 04:54:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:45.123 04:54:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:45.123 killing process with pid 118914 00:11:45.123 04:54:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118914' 00:11:45.123 04:54:14 -- common/autotest_common.sh@945 -- # kill 118914 00:11:45.123 04:54:14 -- common/autotest_common.sh@950 -- # wait 118914 00:11:46.538 04:54:16 -- accel/accel.sh@68 -- # trap - ERR 00:11:46.538 04:54:16 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:11:46.538 04:54:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:46.538 04:54:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:46.538 04:54:16 -- common/autotest_common.sh@10 -- # set +x 00:11:46.538 04:54:16 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:11:46.538 04:54:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:11:46.538 04:54:16 -- accel/accel.sh@12 -- # build_accel_config 00:11:46.538 04:54:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:46.538 04:54:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:46.538 04:54:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:46.538 04:54:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:46.538 04:54:16 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:11:46.538 04:54:16 -- accel/accel.sh@41 -- # local IFS=, 00:11:46.538 04:54:16 -- accel/accel.sh@42 -- # jq -r . 00:11:46.538 04:54:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.538 04:54:16 -- common/autotest_common.sh@10 -- # set +x 00:11:46.538 04:54:16 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:11:46.538 04:54:16 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:46.538 04:54:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:46.538 04:54:16 -- common/autotest_common.sh@10 -- # set +x 00:11:46.538 ************************************ 00:11:46.538 START TEST accel_missing_filename 00:11:46.538 ************************************ 00:11:46.538 04:54:16 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:11:46.538 04:54:16 -- common/autotest_common.sh@640 -- # local es=0 00:11:46.538 04:54:16 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:11:46.538 04:54:16 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:11:46.538 04:54:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:46.538 04:54:16 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:11:46.538 04:54:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:46.538 04:54:16 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:11:46.538 04:54:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:11:46.538 04:54:16 -- accel/accel.sh@12 -- # build_accel_config 00:11:46.538 04:54:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:46.538 04:54:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:46.538 04:54:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:46.538 04:54:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:46.538 04:54:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:46.538 04:54:16 -- accel/accel.sh@41 -- # local IFS=, 00:11:46.538 04:54:16 -- accel/accel.sh@42 -- # jq -r . 00:11:46.538 [2024-04-27 04:54:16.155805] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:46.538 [2024-04-27 04:54:16.156095] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118998 ] 00:11:46.538 [2024-04-27 04:54:16.326919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.797 [2024-04-27 04:54:16.451213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.797 [2024-04-27 04:54:16.587689] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:47.057 [2024-04-27 04:54:16.863102] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:11:47.316 A filename is required. 
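The "A filename is required." abort above is the expected result of this negative test: for the compress workload accel_perf needs an input file via -l, and the run deliberately omits it. Stripped of the test harness, the failing invocation looks like the sketch below; binary and bib file paths are as they appear in this log, and the follow-on compress_verify test is the variant that does pass -l together with -y:

  # expected to fail: compress workload without an input file
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
  # the next test supplies the input file but adds -y, which compression rejects
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y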
00:11:47.316 04:54:17 -- common/autotest_common.sh@643 -- # es=234 00:11:47.316 04:54:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:47.316 04:54:17 -- common/autotest_common.sh@652 -- # es=106 00:11:47.316 04:54:17 -- common/autotest_common.sh@653 -- # case "$es" in 00:11:47.316 04:54:17 -- common/autotest_common.sh@660 -- # es=1 00:11:47.316 04:54:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:47.316 00:11:47.316 real 0m0.953s 00:11:47.316 user 0m0.591s 00:11:47.316 sys 0m0.307s 00:11:47.316 04:54:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.316 04:54:17 -- common/autotest_common.sh@10 -- # set +x 00:11:47.316 ************************************ 00:11:47.316 END TEST accel_missing_filename 00:11:47.316 ************************************ 00:11:47.316 04:54:17 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:47.316 04:54:17 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:11:47.316 04:54:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:47.316 04:54:17 -- common/autotest_common.sh@10 -- # set +x 00:11:47.316 ************************************ 00:11:47.316 START TEST accel_compress_verify 00:11:47.316 ************************************ 00:11:47.316 04:54:17 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:47.316 04:54:17 -- common/autotest_common.sh@640 -- # local es=0 00:11:47.316 04:54:17 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:47.316 04:54:17 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:11:47.316 04:54:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:47.316 04:54:17 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:11:47.316 04:54:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:47.316 04:54:17 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:47.316 04:54:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:47.316 04:54:17 -- accel/accel.sh@12 -- # build_accel_config 00:11:47.316 04:54:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:47.316 04:54:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:47.316 04:54:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:47.316 04:54:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:47.316 04:54:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:47.316 04:54:17 -- accel/accel.sh@41 -- # local IFS=, 00:11:47.316 04:54:17 -- accel/accel.sh@42 -- # jq -r . 00:11:47.316 [2024-04-27 04:54:17.154321] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:11:47.316 [2024-04-27 04:54:17.154579] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119037 ] 00:11:47.575 [2024-04-27 04:54:17.326215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.575 [2024-04-27 04:54:17.433151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.834 [2024-04-27 04:54:17.573285] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:48.092 [2024-04-27 04:54:17.833231] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:11:48.351 00:11:48.351 Compression does not support the verify option, aborting. 00:11:48.351 04:54:17 -- common/autotest_common.sh@643 -- # es=161 00:11:48.351 04:54:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:48.351 04:54:17 -- common/autotest_common.sh@652 -- # es=33 00:11:48.351 04:54:17 -- common/autotest_common.sh@653 -- # case "$es" in 00:11:48.351 04:54:17 -- common/autotest_common.sh@660 -- # es=1 00:11:48.351 04:54:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:48.351 00:11:48.351 real 0m0.877s 00:11:48.351 user 0m0.553s 00:11:48.351 sys 0m0.271s 00:11:48.351 04:54:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.351 04:54:17 -- common/autotest_common.sh@10 -- # set +x 00:11:48.351 ************************************ 00:11:48.351 END TEST accel_compress_verify 00:11:48.351 ************************************ 00:11:48.351 04:54:18 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:11:48.351 04:54:18 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:48.351 04:54:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:48.351 04:54:18 -- common/autotest_common.sh@10 -- # set +x 00:11:48.351 ************************************ 00:11:48.351 START TEST accel_wrong_workload 00:11:48.351 ************************************ 00:11:48.351 04:54:18 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:11:48.351 04:54:18 -- common/autotest_common.sh@640 -- # local es=0 00:11:48.351 04:54:18 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:11:48.351 04:54:18 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:11:48.351 04:54:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:48.351 04:54:18 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:11:48.352 04:54:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:48.352 04:54:18 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:11:48.352 04:54:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:11:48.352 04:54:18 -- accel/accel.sh@12 -- # build_accel_config 00:11:48.352 04:54:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:48.352 04:54:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:48.352 04:54:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:48.352 04:54:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:48.352 04:54:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:48.352 04:54:18 -- accel/accel.sh@41 -- # local IFS=, 00:11:48.352 04:54:18 -- accel/accel.sh@42 -- # jq -r . 
00:11:48.352 Unsupported workload type: foobar 00:11:48.352 [2024-04-27 04:54:18.081892] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:11:48.352 accel_perf options: 00:11:48.352 [-h help message] 00:11:48.352 [-q queue depth per core] 00:11:48.352 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:11:48.352 [-T number of threads per core 00:11:48.352 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:11:48.352 [-t time in seconds] 00:11:48.352 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:11:48.352 [ dif_verify, , dif_generate, dif_generate_copy 00:11:48.352 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:11:48.352 [-l for compress/decompress workloads, name of uncompressed input file 00:11:48.352 [-S for crc32c workload, use this seed value (default 0) 00:11:48.352 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:11:48.352 [-f for fill workload, use this BYTE value (default 255) 00:11:48.352 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:11:48.352 [-y verify result if this switch is on] 00:11:48.352 [-a tasks to allocate per core (default: same value as -q)] 00:11:48.352 Can be used to spread operations across a wider range of memory. 00:11:48.352 04:54:18 -- common/autotest_common.sh@643 -- # es=1 00:11:48.352 04:54:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:48.352 04:54:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:48.352 04:54:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:48.352 00:11:48.352 real 0m0.057s 00:11:48.352 user 0m0.082s 00:11:48.352 sys 0m0.027s 00:11:48.352 04:54:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.352 ************************************ 00:11:48.352 END TEST accel_wrong_workload 00:11:48.352 04:54:18 -- common/autotest_common.sh@10 -- # set +x 00:11:48.352 ************************************ 00:11:48.352 04:54:18 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:11:48.352 04:54:18 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:11:48.352 04:54:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:48.352 04:54:18 -- common/autotest_common.sh@10 -- # set +x 00:11:48.352 ************************************ 00:11:48.352 START TEST accel_negative_buffers 00:11:48.352 ************************************ 00:11:48.352 04:54:18 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:11:48.352 04:54:18 -- common/autotest_common.sh@640 -- # local es=0 00:11:48.352 04:54:18 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:11:48.352 04:54:18 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:11:48.352 04:54:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:48.352 04:54:18 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:11:48.352 04:54:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:48.352 04:54:18 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:11:48.352 04:54:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:11:48.352 04:54:18 -- accel/accel.sh@12 -- # 
build_accel_config 00:11:48.352 04:54:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:48.352 04:54:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:48.352 04:54:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:48.352 04:54:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:48.352 04:54:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:48.352 04:54:18 -- accel/accel.sh@41 -- # local IFS=, 00:11:48.352 04:54:18 -- accel/accel.sh@42 -- # jq -r . 00:11:48.352 -x option must be non-negative. 00:11:48.352 [2024-04-27 04:54:18.181747] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:11:48.352 accel_perf options: 00:11:48.352 [-h help message] 00:11:48.352 [-q queue depth per core] 00:11:48.352 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:11:48.352 [-T number of threads per core 00:11:48.352 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:11:48.352 [-t time in seconds] 00:11:48.352 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:11:48.352 [ dif_verify, , dif_generate, dif_generate_copy 00:11:48.352 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:11:48.352 [-l for compress/decompress workloads, name of uncompressed input file 00:11:48.352 [-S for crc32c workload, use this seed value (default 0) 00:11:48.352 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:11:48.352 [-f for fill workload, use this BYTE value (default 255) 00:11:48.352 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:11:48.352 [-y verify result if this switch is on] 00:11:48.352 [-a tasks to allocate per core (default: same value as -q)] 00:11:48.352 Can be used to spread operations across a wider range of memory. 
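For contrast with the negative -x value rejected above, a minimal sketch of a valid xor invocation, assuming the same accel_perf binary from this build (the usage text above documents -x as the number of xor source buffers, minimum 2, and -y as result verification):

  # accel_negative_buffers passes -x -1 and expects option parsing to fail;
  # a valid run supplies a non-negative source-buffer count instead
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2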
00:11:48.352 04:54:18 -- common/autotest_common.sh@643 -- # es=1 00:11:48.352 04:54:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:48.352 04:54:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:48.352 04:54:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:48.352 00:11:48.352 real 0m0.054s 00:11:48.352 user 0m0.035s 00:11:48.352 sys 0m0.020s 00:11:48.352 04:54:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.352 ************************************ 00:11:48.352 END TEST accel_negative_buffers 00:11:48.352 ************************************ 00:11:48.352 04:54:18 -- common/autotest_common.sh@10 -- # set +x 00:11:48.611 04:54:18 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:11:48.611 04:54:18 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:48.611 04:54:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:48.611 04:54:18 -- common/autotest_common.sh@10 -- # set +x 00:11:48.611 ************************************ 00:11:48.611 START TEST accel_crc32c 00:11:48.611 ************************************ 00:11:48.611 04:54:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:11:48.611 04:54:18 -- accel/accel.sh@16 -- # local accel_opc 00:11:48.611 04:54:18 -- accel/accel.sh@17 -- # local accel_module 00:11:48.611 04:54:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:11:48.611 04:54:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:11:48.611 04:54:18 -- accel/accel.sh@12 -- # build_accel_config 00:11:48.611 04:54:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:48.611 04:54:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:48.611 04:54:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:48.611 04:54:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:48.611 04:54:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:48.611 04:54:18 -- accel/accel.sh@41 -- # local IFS=, 00:11:48.611 04:54:18 -- accel/accel.sh@42 -- # jq -r . 00:11:48.611 [2024-04-27 04:54:18.292392] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:48.611 [2024-04-27 04:54:18.292820] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119120 ] 00:11:48.611 [2024-04-27 04:54:18.467432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.870 [2024-04-27 04:54:18.583935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.769 04:54:20 -- accel/accel.sh@18 -- # out=' 00:11:50.769 SPDK Configuration: 00:11:50.769 Core mask: 0x1 00:11:50.769 00:11:50.769 Accel Perf Configuration: 00:11:50.769 Workload Type: crc32c 00:11:50.769 CRC-32C seed: 32 00:11:50.769 Transfer size: 4096 bytes 00:11:50.769 Vector count 1 00:11:50.769 Module: software 00:11:50.769 Queue depth: 32 00:11:50.769 Allocate depth: 32 00:11:50.769 # threads/core: 1 00:11:50.769 Run time: 1 seconds 00:11:50.769 Verify: Yes 00:11:50.769 00:11:50.769 Running for 1 seconds... 
00:11:50.769 00:11:50.769 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:50.769 ------------------------------------------------------------------------------------ 00:11:50.769 0,0 429088/s 1676 MiB/s 0 0 00:11:50.769 ==================================================================================== 00:11:50.769 Total 429088/s 1676 MiB/s 0 0' 00:11:50.769 04:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:50.769 04:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:50.769 04:54:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:11:50.769 04:54:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:11:50.769 04:54:20 -- accel/accel.sh@12 -- # build_accel_config 00:11:50.769 04:54:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:50.769 04:54:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:50.769 04:54:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:50.769 04:54:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:50.769 04:54:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:50.769 04:54:20 -- accel/accel.sh@41 -- # local IFS=, 00:11:50.769 04:54:20 -- accel/accel.sh@42 -- # jq -r . 00:11:50.769 [2024-04-27 04:54:20.231221] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:50.769 [2024-04-27 04:54:20.231505] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119150 ] 00:11:50.769 [2024-04-27 04:54:20.406965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.769 [2024-04-27 04:54:20.540145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.028 04:54:20 -- accel/accel.sh@21 -- # val= 00:11:51.028 04:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:51.028 04:54:20 -- accel/accel.sh@21 -- # val= 00:11:51.028 04:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:51.028 04:54:20 -- accel/accel.sh@21 -- # val=0x1 00:11:51.028 04:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:51.028 04:54:20 -- accel/accel.sh@21 -- # val= 00:11:51.028 04:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:51.028 04:54:20 -- accel/accel.sh@21 -- # val= 00:11:51.028 04:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:51.028 04:54:20 -- accel/accel.sh@21 -- # val=crc32c 00:11:51.028 04:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.028 04:54:20 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:51.028 04:54:20 -- accel/accel.sh@21 -- # val=32 00:11:51.028 04:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:51.028 04:54:20 
-- accel/accel.sh@21 -- # val='4096 bytes' 00:11:51.028 04:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:51.028 04:54:20 -- accel/accel.sh@21 -- # val= 00:11:51.028 04:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:51.028 04:54:20 -- accel/accel.sh@21 -- # val=software 00:11:51.028 04:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.028 04:54:20 -- accel/accel.sh@23 -- # accel_module=software 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:51.028 04:54:20 -- accel/accel.sh@21 -- # val=32 00:11:51.028 04:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.028 04:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:51.029 04:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:51.029 04:54:20 -- accel/accel.sh@21 -- # val=32 00:11:51.029 04:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.029 04:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:51.029 04:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:51.029 04:54:20 -- accel/accel.sh@21 -- # val=1 00:11:51.029 04:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.029 04:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:51.029 04:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:51.029 04:54:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:51.029 04:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.029 04:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:51.029 04:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:51.029 04:54:20 -- accel/accel.sh@21 -- # val=Yes 00:11:51.029 04:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.029 04:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:51.029 04:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:51.029 04:54:20 -- accel/accel.sh@21 -- # val= 00:11:51.029 04:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.029 04:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:51.029 04:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:51.029 04:54:20 -- accel/accel.sh@21 -- # val= 00:11:51.029 04:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.029 04:54:20 -- accel/accel.sh@20 -- # IFS=: 00:11:51.029 04:54:20 -- accel/accel.sh@20 -- # read -r var val 00:11:52.413 04:54:22 -- accel/accel.sh@21 -- # val= 00:11:52.413 04:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.413 04:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:52.413 04:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:52.413 04:54:22 -- accel/accel.sh@21 -- # val= 00:11:52.413 04:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.413 04:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:52.413 04:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:52.413 04:54:22 -- accel/accel.sh@21 -- # val= 00:11:52.413 04:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.413 04:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:52.413 04:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:52.413 04:54:22 -- accel/accel.sh@21 -- # val= 00:11:52.413 04:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.413 04:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:52.413 04:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:52.413 04:54:22 -- accel/accel.sh@21 -- # val= 00:11:52.413 04:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.413 04:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:52.413 04:54:22 
-- accel/accel.sh@20 -- # read -r var val 00:11:52.413 04:54:22 -- accel/accel.sh@21 -- # val= 00:11:52.413 04:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.413 04:54:22 -- accel/accel.sh@20 -- # IFS=: 00:11:52.413 04:54:22 -- accel/accel.sh@20 -- # read -r var val 00:11:52.413 04:54:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:52.413 04:54:22 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:11:52.413 ************************************ 00:11:52.413 END TEST accel_crc32c 00:11:52.413 ************************************ 00:11:52.413 04:54:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:52.413 00:11:52.413 real 0m3.878s 00:11:52.413 user 0m3.087s 00:11:52.413 sys 0m0.608s 00:11:52.413 04:54:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.413 04:54:22 -- common/autotest_common.sh@10 -- # set +x 00:11:52.413 04:54:22 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:11:52.413 04:54:22 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:52.413 04:54:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:52.413 04:54:22 -- common/autotest_common.sh@10 -- # set +x 00:11:52.413 ************************************ 00:11:52.413 START TEST accel_crc32c_C2 00:11:52.413 ************************************ 00:11:52.413 04:54:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:11:52.413 04:54:22 -- accel/accel.sh@16 -- # local accel_opc 00:11:52.413 04:54:22 -- accel/accel.sh@17 -- # local accel_module 00:11:52.413 04:54:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:11:52.413 04:54:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:11:52.413 04:54:22 -- accel/accel.sh@12 -- # build_accel_config 00:11:52.413 04:54:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:52.413 04:54:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:52.413 04:54:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:52.413 04:54:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:52.413 04:54:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:52.413 04:54:22 -- accel/accel.sh@41 -- # local IFS=, 00:11:52.413 04:54:22 -- accel/accel.sh@42 -- # jq -r . 00:11:52.413 [2024-04-27 04:54:22.223552] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:52.413 [2024-04-27 04:54:22.223977] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119195 ] 00:11:52.687 [2024-04-27 04:54:22.396216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.687 [2024-04-27 04:54:22.521140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.585 04:54:24 -- accel/accel.sh@18 -- # out=' 00:11:54.585 SPDK Configuration: 00:11:54.585 Core mask: 0x1 00:11:54.585 00:11:54.585 Accel Perf Configuration: 00:11:54.585 Workload Type: crc32c 00:11:54.585 CRC-32C seed: 0 00:11:54.585 Transfer size: 4096 bytes 00:11:54.585 Vector count 2 00:11:54.585 Module: software 00:11:54.585 Queue depth: 32 00:11:54.585 Allocate depth: 32 00:11:54.585 # threads/core: 1 00:11:54.585 Run time: 1 seconds 00:11:54.585 Verify: Yes 00:11:54.585 00:11:54.585 Running for 1 seconds... 
00:11:54.585 00:11:54.585 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:54.585 ------------------------------------------------------------------------------------ 00:11:54.585 0,0 338240/s 2642 MiB/s 0 0 00:11:54.585 ==================================================================================== 00:11:54.585 Total 338240/s 1321 MiB/s 0 0' 00:11:54.585 04:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:54.585 04:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:54.585 04:54:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:11:54.585 04:54:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:11:54.585 04:54:24 -- accel/accel.sh@12 -- # build_accel_config 00:11:54.585 04:54:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:54.585 04:54:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:54.585 04:54:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:54.585 04:54:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:54.585 04:54:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:54.585 04:54:24 -- accel/accel.sh@41 -- # local IFS=, 00:11:54.585 04:54:24 -- accel/accel.sh@42 -- # jq -r . 00:11:54.585 [2024-04-27 04:54:24.140249] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:54.585 [2024-04-27 04:54:24.140690] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119232 ] 00:11:54.585 [2024-04-27 04:54:24.313914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.585 [2024-04-27 04:54:24.458875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.844 04:54:24 -- accel/accel.sh@21 -- # val= 00:11:54.844 04:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 04:54:24 -- accel/accel.sh@21 -- # val= 00:11:54.844 04:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 04:54:24 -- accel/accel.sh@21 -- # val=0x1 00:11:54.844 04:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 04:54:24 -- accel/accel.sh@21 -- # val= 00:11:54.844 04:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 04:54:24 -- accel/accel.sh@21 -- # val= 00:11:54.844 04:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 04:54:24 -- accel/accel.sh@21 -- # val=crc32c 00:11:54.844 04:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 04:54:24 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 04:54:24 -- accel/accel.sh@21 -- # val=0 00:11:54.844 04:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 04:54:24 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:11:54.844 04:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 04:54:24 -- accel/accel.sh@21 -- # val= 00:11:54.844 04:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 04:54:24 -- accel/accel.sh@21 -- # val=software 00:11:54.844 04:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 04:54:24 -- accel/accel.sh@23 -- # accel_module=software 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 04:54:24 -- accel/accel.sh@21 -- # val=32 00:11:54.844 04:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 04:54:24 -- accel/accel.sh@21 -- # val=32 00:11:54.844 04:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 04:54:24 -- accel/accel.sh@21 -- # val=1 00:11:54.844 04:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 04:54:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:54.844 04:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 04:54:24 -- accel/accel.sh@21 -- # val=Yes 00:11:54.844 04:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 04:54:24 -- accel/accel.sh@21 -- # val= 00:11:54.844 04:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 04:54:24 -- accel/accel.sh@21 -- # val= 00:11:54.844 04:54:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 04:54:24 -- accel/accel.sh@20 -- # read -r var val 00:11:56.218 04:54:26 -- accel/accel.sh@21 -- # val= 00:11:56.218 04:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.218 04:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:56.218 04:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:56.218 04:54:26 -- accel/accel.sh@21 -- # val= 00:11:56.218 04:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.218 04:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:56.218 04:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:56.218 04:54:26 -- accel/accel.sh@21 -- # val= 00:11:56.218 04:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.218 04:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:56.218 04:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:56.218 04:54:26 -- accel/accel.sh@21 -- # val= 00:11:56.218 04:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.218 04:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:56.218 04:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:56.218 04:54:26 -- accel/accel.sh@21 -- # val= 00:11:56.218 04:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.218 04:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:56.218 04:54:26 -- 
accel/accel.sh@20 -- # read -r var val 00:11:56.218 04:54:26 -- accel/accel.sh@21 -- # val= 00:11:56.218 04:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.218 04:54:26 -- accel/accel.sh@20 -- # IFS=: 00:11:56.218 04:54:26 -- accel/accel.sh@20 -- # read -r var val 00:11:56.218 ************************************ 00:11:56.218 END TEST accel_crc32c_C2 00:11:56.218 ************************************ 00:11:56.218 04:54:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:56.218 04:54:26 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:11:56.218 04:54:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:56.218 00:11:56.218 real 0m3.895s 00:11:56.218 user 0m3.120s 00:11:56.218 sys 0m0.596s 00:11:56.218 04:54:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:56.218 04:54:26 -- common/autotest_common.sh@10 -- # set +x 00:11:56.476 04:54:26 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:11:56.476 04:54:26 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:56.476 04:54:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:56.476 04:54:26 -- common/autotest_common.sh@10 -- # set +x 00:11:56.476 ************************************ 00:11:56.476 START TEST accel_copy 00:11:56.476 ************************************ 00:11:56.476 04:54:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:11:56.476 04:54:26 -- accel/accel.sh@16 -- # local accel_opc 00:11:56.476 04:54:26 -- accel/accel.sh@17 -- # local accel_module 00:11:56.476 04:54:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:11:56.476 04:54:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:11:56.476 04:54:26 -- accel/accel.sh@12 -- # build_accel_config 00:11:56.476 04:54:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:56.477 04:54:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:56.477 04:54:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:56.477 04:54:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:56.477 04:54:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:56.477 04:54:26 -- accel/accel.sh@41 -- # local IFS=, 00:11:56.477 04:54:26 -- accel/accel.sh@42 -- # jq -r . 00:11:56.477 [2024-04-27 04:54:26.166794] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:56.477 [2024-04-27 04:54:26.167137] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119277 ] 00:11:56.477 [2024-04-27 04:54:26.327266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.735 [2024-04-27 04:54:26.451190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.634 04:54:28 -- accel/accel.sh@18 -- # out=' 00:11:58.634 SPDK Configuration: 00:11:58.634 Core mask: 0x1 00:11:58.634 00:11:58.634 Accel Perf Configuration: 00:11:58.634 Workload Type: copy 00:11:58.634 Transfer size: 4096 bytes 00:11:58.634 Vector count 1 00:11:58.634 Module: software 00:11:58.634 Queue depth: 32 00:11:58.634 Allocate depth: 32 00:11:58.634 # threads/core: 1 00:11:58.634 Run time: 1 seconds 00:11:58.634 Verify: Yes 00:11:58.634 00:11:58.634 Running for 1 seconds... 
00:11:58.634 00:11:58.634 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:58.634 ------------------------------------------------------------------------------------ 00:11:58.634 0,0 268832/s 1050 MiB/s 0 0 00:11:58.634 ==================================================================================== 00:11:58.634 Total 268832/s 1050 MiB/s 0 0' 00:11:58.634 04:54:28 -- accel/accel.sh@20 -- # IFS=: 00:11:58.634 04:54:28 -- accel/accel.sh@20 -- # read -r var val 00:11:58.634 04:54:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:11:58.634 04:54:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:11:58.634 04:54:28 -- accel/accel.sh@12 -- # build_accel_config 00:11:58.634 04:54:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:58.634 04:54:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:58.634 04:54:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:58.634 04:54:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:58.634 04:54:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:58.635 04:54:28 -- accel/accel.sh@41 -- # local IFS=, 00:11:58.635 04:54:28 -- accel/accel.sh@42 -- # jq -r . 00:11:58.635 [2024-04-27 04:54:28.084496] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:58.635 [2024-04-27 04:54:28.085007] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119306 ] 00:11:58.635 [2024-04-27 04:54:28.254982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.635 [2024-04-27 04:54:28.395007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.893 04:54:28 -- accel/accel.sh@21 -- # val= 00:11:58.893 04:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # IFS=: 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # read -r var val 00:11:58.893 04:54:28 -- accel/accel.sh@21 -- # val= 00:11:58.893 04:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # IFS=: 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # read -r var val 00:11:58.893 04:54:28 -- accel/accel.sh@21 -- # val=0x1 00:11:58.893 04:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # IFS=: 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # read -r var val 00:11:58.893 04:54:28 -- accel/accel.sh@21 -- # val= 00:11:58.893 04:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # IFS=: 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # read -r var val 00:11:58.893 04:54:28 -- accel/accel.sh@21 -- # val= 00:11:58.893 04:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # IFS=: 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # read -r var val 00:11:58.893 04:54:28 -- accel/accel.sh@21 -- # val=copy 00:11:58.893 04:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.893 04:54:28 -- accel/accel.sh@24 -- # accel_opc=copy 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # IFS=: 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # read -r var val 00:11:58.893 04:54:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:58.893 04:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # IFS=: 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # read -r var val 00:11:58.893 04:54:28 -- 
accel/accel.sh@21 -- # val= 00:11:58.893 04:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # IFS=: 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # read -r var val 00:11:58.893 04:54:28 -- accel/accel.sh@21 -- # val=software 00:11:58.893 04:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.893 04:54:28 -- accel/accel.sh@23 -- # accel_module=software 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # IFS=: 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # read -r var val 00:11:58.893 04:54:28 -- accel/accel.sh@21 -- # val=32 00:11:58.893 04:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # IFS=: 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # read -r var val 00:11:58.893 04:54:28 -- accel/accel.sh@21 -- # val=32 00:11:58.893 04:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # IFS=: 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # read -r var val 00:11:58.893 04:54:28 -- accel/accel.sh@21 -- # val=1 00:11:58.893 04:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # IFS=: 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # read -r var val 00:11:58.893 04:54:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:58.893 04:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.893 04:54:28 -- accel/accel.sh@20 -- # IFS=: 00:11:58.894 04:54:28 -- accel/accel.sh@20 -- # read -r var val 00:11:58.894 04:54:28 -- accel/accel.sh@21 -- # val=Yes 00:11:58.894 04:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.894 04:54:28 -- accel/accel.sh@20 -- # IFS=: 00:11:58.894 04:54:28 -- accel/accel.sh@20 -- # read -r var val 00:11:58.894 04:54:28 -- accel/accel.sh@21 -- # val= 00:11:58.894 04:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.894 04:54:28 -- accel/accel.sh@20 -- # IFS=: 00:11:58.894 04:54:28 -- accel/accel.sh@20 -- # read -r var val 00:11:58.894 04:54:28 -- accel/accel.sh@21 -- # val= 00:11:58.894 04:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.894 04:54:28 -- accel/accel.sh@20 -- # IFS=: 00:11:58.894 04:54:28 -- accel/accel.sh@20 -- # read -r var val 00:12:00.266 04:54:29 -- accel/accel.sh@21 -- # val= 00:12:00.266 04:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.266 04:54:29 -- accel/accel.sh@20 -- # IFS=: 00:12:00.266 04:54:29 -- accel/accel.sh@20 -- # read -r var val 00:12:00.266 04:54:29 -- accel/accel.sh@21 -- # val= 00:12:00.266 04:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.266 04:54:29 -- accel/accel.sh@20 -- # IFS=: 00:12:00.266 04:54:29 -- accel/accel.sh@20 -- # read -r var val 00:12:00.266 04:54:29 -- accel/accel.sh@21 -- # val= 00:12:00.266 04:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.266 04:54:29 -- accel/accel.sh@20 -- # IFS=: 00:12:00.266 04:54:29 -- accel/accel.sh@20 -- # read -r var val 00:12:00.266 04:54:29 -- accel/accel.sh@21 -- # val= 00:12:00.266 04:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.266 04:54:29 -- accel/accel.sh@20 -- # IFS=: 00:12:00.266 04:54:29 -- accel/accel.sh@20 -- # read -r var val 00:12:00.266 04:54:29 -- accel/accel.sh@21 -- # val= 00:12:00.266 04:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.266 04:54:29 -- accel/accel.sh@20 -- # IFS=: 00:12:00.266 04:54:29 -- accel/accel.sh@20 -- # read -r var val 00:12:00.266 04:54:29 -- accel/accel.sh@21 -- # val= 00:12:00.266 04:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:00.266 04:54:29 -- accel/accel.sh@20 -- # IFS=: 00:12:00.266 04:54:29 -- 
accel/accel.sh@20 -- # read -r var val 00:12:00.266 ************************************ 00:12:00.266 END TEST accel_copy 00:12:00.266 ************************************ 00:12:00.266 04:54:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:00.266 04:54:29 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:12:00.266 04:54:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:00.266 00:12:00.266 real 0m3.862s 00:12:00.266 user 0m3.092s 00:12:00.266 sys 0m0.576s 00:12:00.266 04:54:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:00.266 04:54:29 -- common/autotest_common.sh@10 -- # set +x 00:12:00.266 04:54:30 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:00.266 04:54:30 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:00.266 04:54:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:00.266 04:54:30 -- common/autotest_common.sh@10 -- # set +x 00:12:00.266 ************************************ 00:12:00.266 START TEST accel_fill 00:12:00.266 ************************************ 00:12:00.266 04:54:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:00.266 04:54:30 -- accel/accel.sh@16 -- # local accel_opc 00:12:00.266 04:54:30 -- accel/accel.sh@17 -- # local accel_module 00:12:00.266 04:54:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:00.266 04:54:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:00.266 04:54:30 -- accel/accel.sh@12 -- # build_accel_config 00:12:00.266 04:54:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:00.266 04:54:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:00.266 04:54:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:00.266 04:54:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:00.266 04:54:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:00.266 04:54:30 -- accel/accel.sh@41 -- # local IFS=, 00:12:00.266 04:54:30 -- accel/accel.sh@42 -- # jq -r . 00:12:00.266 [2024-04-27 04:54:30.089103] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:00.266 [2024-04-27 04:54:30.089926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119346 ] 00:12:00.524 [2024-04-27 04:54:30.278124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.524 [2024-04-27 04:54:30.390636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.423 04:54:31 -- accel/accel.sh@18 -- # out=' 00:12:02.423 SPDK Configuration: 00:12:02.423 Core mask: 0x1 00:12:02.423 00:12:02.423 Accel Perf Configuration: 00:12:02.423 Workload Type: fill 00:12:02.423 Fill pattern: 0x80 00:12:02.423 Transfer size: 4096 bytes 00:12:02.423 Vector count 1 00:12:02.423 Module: software 00:12:02.423 Queue depth: 64 00:12:02.423 Allocate depth: 64 00:12:02.423 # threads/core: 1 00:12:02.423 Run time: 1 seconds 00:12:02.423 Verify: Yes 00:12:02.423 00:12:02.423 Running for 1 seconds... 
00:12:02.423 00:12:02.423 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:02.423 ------------------------------------------------------------------------------------ 00:12:02.423 0,0 406912/s 1589 MiB/s 0 0 00:12:02.423 ==================================================================================== 00:12:02.423 Total 406912/s 1589 MiB/s 0 0' 00:12:02.423 04:54:32 -- accel/accel.sh@20 -- # IFS=: 00:12:02.423 04:54:32 -- accel/accel.sh@20 -- # read -r var val 00:12:02.423 04:54:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:02.423 04:54:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:02.423 04:54:32 -- accel/accel.sh@12 -- # build_accel_config 00:12:02.423 04:54:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:02.423 04:54:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:02.423 04:54:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:02.423 04:54:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:02.423 04:54:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:02.423 04:54:32 -- accel/accel.sh@41 -- # local IFS=, 00:12:02.423 04:54:32 -- accel/accel.sh@42 -- # jq -r . 00:12:02.423 [2024-04-27 04:54:32.039669] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:02.423 [2024-04-27 04:54:32.040125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119381 ] 00:12:02.423 [2024-04-27 04:54:32.212227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.683 [2024-04-27 04:54:32.378434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.683 04:54:32 -- accel/accel.sh@21 -- # val= 00:12:02.683 04:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # IFS=: 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # read -r var val 00:12:02.683 04:54:32 -- accel/accel.sh@21 -- # val= 00:12:02.683 04:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # IFS=: 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # read -r var val 00:12:02.683 04:54:32 -- accel/accel.sh@21 -- # val=0x1 00:12:02.683 04:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # IFS=: 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # read -r var val 00:12:02.683 04:54:32 -- accel/accel.sh@21 -- # val= 00:12:02.683 04:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # IFS=: 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # read -r var val 00:12:02.683 04:54:32 -- accel/accel.sh@21 -- # val= 00:12:02.683 04:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # IFS=: 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # read -r var val 00:12:02.683 04:54:32 -- accel/accel.sh@21 -- # val=fill 00:12:02.683 04:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.683 04:54:32 -- accel/accel.sh@24 -- # accel_opc=fill 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # IFS=: 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # read -r var val 00:12:02.683 04:54:32 -- accel/accel.sh@21 -- # val=0x80 00:12:02.683 04:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # IFS=: 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # read -r var val 
00:12:02.683 04:54:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:02.683 04:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # IFS=: 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # read -r var val 00:12:02.683 04:54:32 -- accel/accel.sh@21 -- # val= 00:12:02.683 04:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # IFS=: 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # read -r var val 00:12:02.683 04:54:32 -- accel/accel.sh@21 -- # val=software 00:12:02.683 04:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.683 04:54:32 -- accel/accel.sh@23 -- # accel_module=software 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # IFS=: 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # read -r var val 00:12:02.683 04:54:32 -- accel/accel.sh@21 -- # val=64 00:12:02.683 04:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # IFS=: 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # read -r var val 00:12:02.683 04:54:32 -- accel/accel.sh@21 -- # val=64 00:12:02.683 04:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # IFS=: 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # read -r var val 00:12:02.683 04:54:32 -- accel/accel.sh@21 -- # val=1 00:12:02.683 04:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # IFS=: 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # read -r var val 00:12:02.683 04:54:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:02.683 04:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # IFS=: 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # read -r var val 00:12:02.683 04:54:32 -- accel/accel.sh@21 -- # val=Yes 00:12:02.683 04:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # IFS=: 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # read -r var val 00:12:02.683 04:54:32 -- accel/accel.sh@21 -- # val= 00:12:02.683 04:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # IFS=: 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # read -r var val 00:12:02.683 04:54:32 -- accel/accel.sh@21 -- # val= 00:12:02.683 04:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # IFS=: 00:12:02.683 04:54:32 -- accel/accel.sh@20 -- # read -r var val 00:12:04.055 04:54:33 -- accel/accel.sh@21 -- # val= 00:12:04.055 04:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.055 04:54:33 -- accel/accel.sh@20 -- # IFS=: 00:12:04.055 04:54:33 -- accel/accel.sh@20 -- # read -r var val 00:12:04.055 04:54:33 -- accel/accel.sh@21 -- # val= 00:12:04.055 04:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.055 04:54:33 -- accel/accel.sh@20 -- # IFS=: 00:12:04.055 04:54:33 -- accel/accel.sh@20 -- # read -r var val 00:12:04.055 04:54:33 -- accel/accel.sh@21 -- # val= 00:12:04.055 04:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.055 04:54:33 -- accel/accel.sh@20 -- # IFS=: 00:12:04.055 04:54:33 -- accel/accel.sh@20 -- # read -r var val 00:12:04.055 04:54:33 -- accel/accel.sh@21 -- # val= 00:12:04.055 04:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.055 04:54:33 -- accel/accel.sh@20 -- # IFS=: 00:12:04.055 04:54:33 -- accel/accel.sh@20 -- # read -r var val 00:12:04.055 04:54:33 -- accel/accel.sh@21 -- # val= 00:12:04.055 04:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.055 04:54:33 -- accel/accel.sh@20 -- # IFS=: 
00:12:04.055 04:54:33 -- accel/accel.sh@20 -- # read -r var val 00:12:04.055 04:54:33 -- accel/accel.sh@21 -- # val= 00:12:04.055 04:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:12:04.055 04:54:33 -- accel/accel.sh@20 -- # IFS=: 00:12:04.055 04:54:33 -- accel/accel.sh@20 -- # read -r var val 00:12:04.313 04:54:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:04.313 04:54:33 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:12:04.313 04:54:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:04.313 00:12:04.313 real 0m3.905s 00:12:04.313 user 0m3.129s 00:12:04.313 sys 0m0.601s 00:12:04.313 04:54:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:04.313 ************************************ 00:12:04.313 END TEST accel_fill 00:12:04.313 ************************************ 00:12:04.313 04:54:33 -- common/autotest_common.sh@10 -- # set +x 00:12:04.313 04:54:33 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:12:04.313 04:54:33 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:12:04.313 04:54:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:04.313 04:54:33 -- common/autotest_common.sh@10 -- # set +x 00:12:04.313 ************************************ 00:12:04.313 START TEST accel_copy_crc32c 00:12:04.313 ************************************ 00:12:04.313 04:54:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:12:04.313 04:54:34 -- accel/accel.sh@16 -- # local accel_opc 00:12:04.313 04:54:34 -- accel/accel.sh@17 -- # local accel_module 00:12:04.313 04:54:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:12:04.313 04:54:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:12:04.313 04:54:34 -- accel/accel.sh@12 -- # build_accel_config 00:12:04.313 04:54:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:04.313 04:54:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:04.313 04:54:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:04.313 04:54:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:04.313 04:54:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:04.313 04:54:34 -- accel/accel.sh@41 -- # local IFS=, 00:12:04.313 04:54:34 -- accel/accel.sh@42 -- # jq -r . 00:12:04.313 [2024-04-27 04:54:34.052653] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:04.313 [2024-04-27 04:54:34.053066] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119426 ] 00:12:04.571 [2024-04-27 04:54:34.226802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.571 [2024-04-27 04:54:34.370137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.495 04:54:35 -- accel/accel.sh@18 -- # out=' 00:12:06.495 SPDK Configuration: 00:12:06.495 Core mask: 0x1 00:12:06.495 00:12:06.495 Accel Perf Configuration: 00:12:06.495 Workload Type: copy_crc32c 00:12:06.495 CRC-32C seed: 0 00:12:06.495 Vector size: 4096 bytes 00:12:06.495 Transfer size: 4096 bytes 00:12:06.495 Vector count 1 00:12:06.495 Module: software 00:12:06.495 Queue depth: 32 00:12:06.495 Allocate depth: 32 00:12:06.495 # threads/core: 1 00:12:06.495 Run time: 1 seconds 00:12:06.495 Verify: Yes 00:12:06.495 00:12:06.495 Running for 1 seconds... 
00:12:06.495 00:12:06.495 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:06.495 ------------------------------------------------------------------------------------ 00:12:06.495 0,0 219360/s 856 MiB/s 0 0 00:12:06.495 ==================================================================================== 00:12:06.495 Total 219360/s 856 MiB/s 0 0' 00:12:06.495 04:54:35 -- accel/accel.sh@20 -- # IFS=: 00:12:06.495 04:54:35 -- accel/accel.sh@20 -- # read -r var val 00:12:06.495 04:54:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:12:06.495 04:54:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:12:06.495 04:54:35 -- accel/accel.sh@12 -- # build_accel_config 00:12:06.495 04:54:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:06.495 04:54:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:06.495 04:54:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:06.495 04:54:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:06.495 04:54:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:06.495 04:54:35 -- accel/accel.sh@41 -- # local IFS=, 00:12:06.495 04:54:35 -- accel/accel.sh@42 -- # jq -r . 00:12:06.495 [2024-04-27 04:54:36.017674] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:06.495 [2024-04-27 04:54:36.018096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119464 ] 00:12:06.495 [2024-04-27 04:54:36.190172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.495 [2024-04-27 04:54:36.344517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.754 04:54:36 -- accel/accel.sh@21 -- # val= 00:12:06.754 04:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:06.754 04:54:36 -- accel/accel.sh@21 -- # val= 00:12:06.754 04:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:06.754 04:54:36 -- accel/accel.sh@21 -- # val=0x1 00:12:06.754 04:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:06.754 04:54:36 -- accel/accel.sh@21 -- # val= 00:12:06.754 04:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:06.754 04:54:36 -- accel/accel.sh@21 -- # val= 00:12:06.754 04:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:06.754 04:54:36 -- accel/accel.sh@21 -- # val=copy_crc32c 00:12:06.754 04:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.754 04:54:36 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:06.754 04:54:36 -- accel/accel.sh@21 -- # val=0 00:12:06.754 04:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:06.754 
04:54:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:06.754 04:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:06.754 04:54:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:06.754 04:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:06.754 04:54:36 -- accel/accel.sh@21 -- # val= 00:12:06.754 04:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:06.754 04:54:36 -- accel/accel.sh@21 -- # val=software 00:12:06.754 04:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.754 04:54:36 -- accel/accel.sh@23 -- # accel_module=software 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:06.754 04:54:36 -- accel/accel.sh@21 -- # val=32 00:12:06.754 04:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:06.754 04:54:36 -- accel/accel.sh@21 -- # val=32 00:12:06.754 04:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:06.754 04:54:36 -- accel/accel.sh@21 -- # val=1 00:12:06.754 04:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:06.754 04:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:06.754 04:54:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:06.755 04:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.755 04:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:06.755 04:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:06.755 04:54:36 -- accel/accel.sh@21 -- # val=Yes 00:12:06.755 04:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.755 04:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:06.755 04:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:06.755 04:54:36 -- accel/accel.sh@21 -- # val= 00:12:06.755 04:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.755 04:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:06.755 04:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:06.755 04:54:36 -- accel/accel.sh@21 -- # val= 00:12:06.755 04:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.755 04:54:36 -- accel/accel.sh@20 -- # IFS=: 00:12:06.755 04:54:36 -- accel/accel.sh@20 -- # read -r var val 00:12:08.130 04:54:37 -- accel/accel.sh@21 -- # val= 00:12:08.130 04:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:12:08.130 04:54:37 -- accel/accel.sh@20 -- # IFS=: 00:12:08.130 04:54:37 -- accel/accel.sh@20 -- # read -r var val 00:12:08.130 04:54:37 -- accel/accel.sh@21 -- # val= 00:12:08.130 04:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:12:08.130 04:54:37 -- accel/accel.sh@20 -- # IFS=: 00:12:08.130 04:54:37 -- accel/accel.sh@20 -- # read -r var val 00:12:08.130 04:54:37 -- accel/accel.sh@21 -- # val= 00:12:08.130 04:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:12:08.130 04:54:37 -- accel/accel.sh@20 -- # IFS=: 00:12:08.130 04:54:37 -- accel/accel.sh@20 -- # read -r var val 00:12:08.130 04:54:37 -- accel/accel.sh@21 -- # val= 00:12:08.130 04:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:12:08.130 04:54:37 -- accel/accel.sh@20 -- # IFS=: 
00:12:08.130 04:54:37 -- accel/accel.sh@20 -- # read -r var val 00:12:08.130 04:54:37 -- accel/accel.sh@21 -- # val= 00:12:08.130 04:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:12:08.130 04:54:37 -- accel/accel.sh@20 -- # IFS=: 00:12:08.130 04:54:37 -- accel/accel.sh@20 -- # read -r var val 00:12:08.130 04:54:37 -- accel/accel.sh@21 -- # val= 00:12:08.130 04:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:12:08.130 04:54:37 -- accel/accel.sh@20 -- # IFS=: 00:12:08.130 04:54:37 -- accel/accel.sh@20 -- # read -r var val 00:12:08.130 ************************************ 00:12:08.130 END TEST accel_copy_crc32c 00:12:08.130 ************************************ 00:12:08.130 04:54:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:08.130 04:54:37 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:12:08.130 04:54:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:08.130 00:12:08.130 real 0m3.956s 00:12:08.130 user 0m3.151s 00:12:08.130 sys 0m0.627s 00:12:08.130 04:54:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:08.130 04:54:37 -- common/autotest_common.sh@10 -- # set +x 00:12:08.130 04:54:38 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:12:08.130 04:54:38 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:12:08.130 04:54:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:08.130 04:54:38 -- common/autotest_common.sh@10 -- # set +x 00:12:08.388 ************************************ 00:12:08.388 START TEST accel_copy_crc32c_C2 00:12:08.388 ************************************ 00:12:08.388 04:54:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:12:08.388 04:54:38 -- accel/accel.sh@16 -- # local accel_opc 00:12:08.388 04:54:38 -- accel/accel.sh@17 -- # local accel_module 00:12:08.388 04:54:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:12:08.388 04:54:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:12:08.388 04:54:38 -- accel/accel.sh@12 -- # build_accel_config 00:12:08.388 04:54:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:08.388 04:54:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:08.388 04:54:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:08.388 04:54:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:08.388 04:54:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:08.388 04:54:38 -- accel/accel.sh@41 -- # local IFS=, 00:12:08.388 04:54:38 -- accel/accel.sh@42 -- # jq -r . 00:12:08.388 [2024-04-27 04:54:38.053741] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:08.389 [2024-04-27 04:54:38.054168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119509 ] 00:12:08.389 [2024-04-27 04:54:38.226813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.647 [2024-04-27 04:54:38.357117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.549 04:54:39 -- accel/accel.sh@18 -- # out=' 00:12:10.549 SPDK Configuration: 00:12:10.549 Core mask: 0x1 00:12:10.549 00:12:10.549 Accel Perf Configuration: 00:12:10.549 Workload Type: copy_crc32c 00:12:10.549 CRC-32C seed: 0 00:12:10.549 Vector size: 4096 bytes 00:12:10.549 Transfer size: 8192 bytes 00:12:10.549 Vector count 2 00:12:10.549 Module: software 00:12:10.549 Queue depth: 32 00:12:10.549 Allocate depth: 32 00:12:10.549 # threads/core: 1 00:12:10.549 Run time: 1 seconds 00:12:10.549 Verify: Yes 00:12:10.549 00:12:10.549 Running for 1 seconds... 00:12:10.549 00:12:10.550 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:10.550 ------------------------------------------------------------------------------------ 00:12:10.550 0,0 155648/s 1216 MiB/s 0 0 00:12:10.550 ==================================================================================== 00:12:10.550 Total 155648/s 608 MiB/s 0 0' 00:12:10.550 04:54:39 -- accel/accel.sh@20 -- # IFS=: 00:12:10.550 04:54:39 -- accel/accel.sh@20 -- # read -r var val 00:12:10.550 04:54:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:12:10.550 04:54:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:12:10.550 04:54:39 -- accel/accel.sh@12 -- # build_accel_config 00:12:10.550 04:54:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:10.550 04:54:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:10.550 04:54:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:10.550 04:54:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:10.550 04:54:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:10.550 04:54:39 -- accel/accel.sh@41 -- # local IFS=, 00:12:10.550 04:54:39 -- accel/accel.sh@42 -- # jq -r . 00:12:10.550 [2024-04-27 04:54:39.993617] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:10.550 [2024-04-27 04:54:39.994677] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119539 ] 00:12:10.550 [2024-04-27 04:54:40.181883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.550 [2024-04-27 04:54:40.331011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.808 04:54:40 -- accel/accel.sh@21 -- # val= 00:12:10.808 04:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # IFS=: 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # read -r var val 00:12:10.808 04:54:40 -- accel/accel.sh@21 -- # val= 00:12:10.808 04:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # IFS=: 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # read -r var val 00:12:10.808 04:54:40 -- accel/accel.sh@21 -- # val=0x1 00:12:10.808 04:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # IFS=: 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # read -r var val 00:12:10.808 04:54:40 -- accel/accel.sh@21 -- # val= 00:12:10.808 04:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # IFS=: 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # read -r var val 00:12:10.808 04:54:40 -- accel/accel.sh@21 -- # val= 00:12:10.808 04:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # IFS=: 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # read -r var val 00:12:10.808 04:54:40 -- accel/accel.sh@21 -- # val=copy_crc32c 00:12:10.808 04:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.808 04:54:40 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # IFS=: 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # read -r var val 00:12:10.808 04:54:40 -- accel/accel.sh@21 -- # val=0 00:12:10.808 04:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # IFS=: 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # read -r var val 00:12:10.808 04:54:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:10.808 04:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # IFS=: 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # read -r var val 00:12:10.808 04:54:40 -- accel/accel.sh@21 -- # val='8192 bytes' 00:12:10.808 04:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # IFS=: 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # read -r var val 00:12:10.808 04:54:40 -- accel/accel.sh@21 -- # val= 00:12:10.808 04:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # IFS=: 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # read -r var val 00:12:10.808 04:54:40 -- accel/accel.sh@21 -- # val=software 00:12:10.808 04:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.808 04:54:40 -- accel/accel.sh@23 -- # accel_module=software 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # IFS=: 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # read -r var val 00:12:10.808 04:54:40 -- accel/accel.sh@21 -- # val=32 00:12:10.808 04:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # IFS=: 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # read -r var val 00:12:10.808 04:54:40 -- accel/accel.sh@21 -- # val=32 
00:12:10.808 04:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # IFS=: 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # read -r var val 00:12:10.808 04:54:40 -- accel/accel.sh@21 -- # val=1 00:12:10.808 04:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # IFS=: 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # read -r var val 00:12:10.808 04:54:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:10.808 04:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # IFS=: 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # read -r var val 00:12:10.808 04:54:40 -- accel/accel.sh@21 -- # val=Yes 00:12:10.808 04:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # IFS=: 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # read -r var val 00:12:10.808 04:54:40 -- accel/accel.sh@21 -- # val= 00:12:10.808 04:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # IFS=: 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # read -r var val 00:12:10.808 04:54:40 -- accel/accel.sh@21 -- # val= 00:12:10.808 04:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # IFS=: 00:12:10.808 04:54:40 -- accel/accel.sh@20 -- # read -r var val 00:12:12.184 04:54:41 -- accel/accel.sh@21 -- # val= 00:12:12.184 04:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.184 04:54:41 -- accel/accel.sh@20 -- # IFS=: 00:12:12.184 04:54:41 -- accel/accel.sh@20 -- # read -r var val 00:12:12.184 04:54:41 -- accel/accel.sh@21 -- # val= 00:12:12.184 04:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.184 04:54:41 -- accel/accel.sh@20 -- # IFS=: 00:12:12.184 04:54:41 -- accel/accel.sh@20 -- # read -r var val 00:12:12.184 04:54:41 -- accel/accel.sh@21 -- # val= 00:12:12.184 04:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.184 04:54:41 -- accel/accel.sh@20 -- # IFS=: 00:12:12.184 04:54:41 -- accel/accel.sh@20 -- # read -r var val 00:12:12.184 04:54:41 -- accel/accel.sh@21 -- # val= 00:12:12.184 04:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.184 04:54:41 -- accel/accel.sh@20 -- # IFS=: 00:12:12.184 04:54:41 -- accel/accel.sh@20 -- # read -r var val 00:12:12.184 04:54:41 -- accel/accel.sh@21 -- # val= 00:12:12.184 04:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.184 04:54:41 -- accel/accel.sh@20 -- # IFS=: 00:12:12.184 04:54:41 -- accel/accel.sh@20 -- # read -r var val 00:12:12.184 04:54:41 -- accel/accel.sh@21 -- # val= 00:12:12.184 04:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:12:12.184 04:54:41 -- accel/accel.sh@20 -- # IFS=: 00:12:12.184 04:54:41 -- accel/accel.sh@20 -- # read -r var val 00:12:12.184 ************************************ 00:12:12.184 END TEST accel_copy_crc32c_C2 00:12:12.184 ************************************ 00:12:12.184 04:54:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:12.184 04:54:41 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:12:12.184 04:54:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:12.184 00:12:12.184 real 0m3.904s 00:12:12.184 user 0m3.114s 00:12:12.184 sys 0m0.610s 00:12:12.184 04:54:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:12.184 04:54:41 -- common/autotest_common.sh@10 -- # set +x 00:12:12.184 04:54:41 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:12:12.184 04:54:41 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
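The two tests that finish above, accel_copy_crc32c and accel_copy_crc32c_C2, drive accel_perf with -w copy_crc32c: the source vector(s) are copied into a destination buffer and a CRC-32C is computed over the copied data with the seed 0 shown in the configuration, and the -C 2 run splits its 8192-byte transfer across two 4096-byte source vectors. As a rough illustration of that semantics only (a Python sketch, not SPDK's C implementation; the helper names and the exact seed convention are assumptions):

def crc32c(data: bytes, seed: int = 0) -> int:
    # Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.
    # With seed 0 this is the standard CRC-32C; SPDK's seed handling may differ.
    crc = seed ^ 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0x82F63B78
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

def copy_crc32c(dst: bytearray, srcs, seed: int = 0) -> int:
    # Copy the source vectors back to back into dst, then checksum the copy.
    offset = 0
    for src in srcs:
        dst[offset:offset + len(src)] = src
        offset += len(src)
    return crc32c(bytes(dst[:offset]), seed)

# Single-vector case (Transfer size: 4096 bytes, Vector count 1):
print(hex(copy_crc32c(bytearray(4096), [bytes(4096)], seed=0)))
# -C 2 case (Transfer size: 8192 bytes, Vector count 2):
print(hex(copy_crc32c(bytearray(8192), [b'\xaa' * 4096, b'\x55' * 4096], seed=0)))

The Transfers and Bandwidth columns in the tables above then, roughly speaking, count how many such operations the software module completed within the one-second run time.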
00:12:12.184 04:54:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:12.184 04:54:41 -- common/autotest_common.sh@10 -- # set +x 00:12:12.184 ************************************ 00:12:12.184 START TEST accel_dualcast 00:12:12.184 ************************************ 00:12:12.184 04:54:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:12:12.184 04:54:41 -- accel/accel.sh@16 -- # local accel_opc 00:12:12.184 04:54:41 -- accel/accel.sh@17 -- # local accel_module 00:12:12.184 04:54:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:12:12.184 04:54:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:12:12.184 04:54:41 -- accel/accel.sh@12 -- # build_accel_config 00:12:12.184 04:54:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:12.184 04:54:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:12.184 04:54:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:12.184 04:54:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:12.184 04:54:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:12.184 04:54:41 -- accel/accel.sh@41 -- # local IFS=, 00:12:12.184 04:54:41 -- accel/accel.sh@42 -- # jq -r . 00:12:12.184 [2024-04-27 04:54:42.015865] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:12.184 [2024-04-27 04:54:42.016376] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119584 ] 00:12:12.451 [2024-04-27 04:54:42.190174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.451 [2024-04-27 04:54:42.287718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.825 04:54:43 -- accel/accel.sh@18 -- # out=' 00:12:13.825 SPDK Configuration: 00:12:13.825 Core mask: 0x1 00:12:13.825 00:12:13.825 Accel Perf Configuration: 00:12:13.825 Workload Type: dualcast 00:12:13.826 Transfer size: 4096 bytes 00:12:13.826 Vector count 1 00:12:13.826 Module: software 00:12:13.826 Queue depth: 32 00:12:13.826 Allocate depth: 32 00:12:13.826 # threads/core: 1 00:12:13.826 Run time: 1 seconds 00:12:13.826 Verify: Yes 00:12:13.826 00:12:13.826 Running for 1 seconds... 00:12:13.826 00:12:13.826 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:13.826 ------------------------------------------------------------------------------------ 00:12:13.826 0,0 296096/s 1156 MiB/s 0 0 00:12:13.826 ==================================================================================== 00:12:13.826 Total 296096/s 1156 MiB/s 0 0' 00:12:13.826 04:54:43 -- accel/accel.sh@20 -- # IFS=: 00:12:13.826 04:54:43 -- accel/accel.sh@20 -- # read -r var val 00:12:13.826 04:54:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:12:13.826 04:54:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:12:13.826 04:54:43 -- accel/accel.sh@12 -- # build_accel_config 00:12:13.826 04:54:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:13.826 04:54:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:13.826 04:54:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:13.826 04:54:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:13.826 04:54:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:13.826 04:54:43 -- accel/accel.sh@41 -- # local IFS=, 00:12:13.826 04:54:43 -- accel/accel.sh@42 -- # jq -r . 
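For the accel_dualcast test in progress here, -w dualcast copies one 4096-byte source buffer into two separate destination buffers, and the -y flag (reported as Verify: Yes in the configuration above) checks the result afterwards. A minimal Python sketch of that semantics, with made-up buffer names and not taken from SPDK:

def dualcast(src: bytes):
    # One read of the source, two independent writes.
    return bytearray(src), bytearray(src)

src = bytes([0x5A]) * 4096            # matches the 4096-byte transfer size
dst1, dst2 = dualcast(src)
assert dst1 == src and dst2 == src    # the verify pass, in spirit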
00:12:13.826 [2024-04-27 04:54:43.699587] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:13.826 [2024-04-27 04:54:43.700836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119619 ] 00:12:14.085 [2024-04-27 04:54:43.876993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.351 [2024-04-27 04:54:43.996465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.351 04:54:44 -- accel/accel.sh@21 -- # val= 00:12:14.352 04:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:14.352 04:54:44 -- accel/accel.sh@21 -- # val= 00:12:14.352 04:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:14.352 04:54:44 -- accel/accel.sh@21 -- # val=0x1 00:12:14.352 04:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:14.352 04:54:44 -- accel/accel.sh@21 -- # val= 00:12:14.352 04:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:14.352 04:54:44 -- accel/accel.sh@21 -- # val= 00:12:14.352 04:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:14.352 04:54:44 -- accel/accel.sh@21 -- # val=dualcast 00:12:14.352 04:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.352 04:54:44 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:14.352 04:54:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:14.352 04:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:14.352 04:54:44 -- accel/accel.sh@21 -- # val= 00:12:14.352 04:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:14.352 04:54:44 -- accel/accel.sh@21 -- # val=software 00:12:14.352 04:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.352 04:54:44 -- accel/accel.sh@23 -- # accel_module=software 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:14.352 04:54:44 -- accel/accel.sh@21 -- # val=32 00:12:14.352 04:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:14.352 04:54:44 -- accel/accel.sh@21 -- # val=32 00:12:14.352 04:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:14.352 04:54:44 -- accel/accel.sh@21 -- # val=1 00:12:14.352 04:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:14.352 
04:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:14.352 04:54:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:14.352 04:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:14.352 04:54:44 -- accel/accel.sh@21 -- # val=Yes 00:12:14.352 04:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:14.352 04:54:44 -- accel/accel.sh@21 -- # val= 00:12:14.352 04:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:14.352 04:54:44 -- accel/accel.sh@21 -- # val= 00:12:14.352 04:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # IFS=: 00:12:14.352 04:54:44 -- accel/accel.sh@20 -- # read -r var val 00:12:15.732 04:54:45 -- accel/accel.sh@21 -- # val= 00:12:15.732 04:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.732 04:54:45 -- accel/accel.sh@20 -- # IFS=: 00:12:15.732 04:54:45 -- accel/accel.sh@20 -- # read -r var val 00:12:15.732 04:54:45 -- accel/accel.sh@21 -- # val= 00:12:15.732 04:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.732 04:54:45 -- accel/accel.sh@20 -- # IFS=: 00:12:15.732 04:54:45 -- accel/accel.sh@20 -- # read -r var val 00:12:15.732 04:54:45 -- accel/accel.sh@21 -- # val= 00:12:15.732 04:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.732 04:54:45 -- accel/accel.sh@20 -- # IFS=: 00:12:15.732 04:54:45 -- accel/accel.sh@20 -- # read -r var val 00:12:15.732 04:54:45 -- accel/accel.sh@21 -- # val= 00:12:15.732 04:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.732 04:54:45 -- accel/accel.sh@20 -- # IFS=: 00:12:15.732 04:54:45 -- accel/accel.sh@20 -- # read -r var val 00:12:15.732 04:54:45 -- accel/accel.sh@21 -- # val= 00:12:15.732 04:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.732 04:54:45 -- accel/accel.sh@20 -- # IFS=: 00:12:15.732 04:54:45 -- accel/accel.sh@20 -- # read -r var val 00:12:15.732 04:54:45 -- accel/accel.sh@21 -- # val= 00:12:15.732 04:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:12:15.732 04:54:45 -- accel/accel.sh@20 -- # IFS=: 00:12:15.732 04:54:45 -- accel/accel.sh@20 -- # read -r var val 00:12:15.732 ************************************ 00:12:15.732 END TEST accel_dualcast 00:12:15.732 ************************************ 00:12:15.732 04:54:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:15.732 04:54:45 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:12:15.732 04:54:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:15.732 00:12:15.732 real 0m3.454s 00:12:15.732 user 0m2.823s 00:12:15.732 sys 0m0.442s 00:12:15.732 04:54:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.732 04:54:45 -- common/autotest_common.sh@10 -- # set +x 00:12:15.732 04:54:45 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:12:15.732 04:54:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:12:15.732 04:54:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:15.732 04:54:45 -- common/autotest_common.sh@10 -- # set +x 00:12:15.732 ************************************ 00:12:15.732 START TEST accel_compare 00:12:15.732 ************************************ 00:12:15.732 04:54:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:12:15.732 
04:54:45 -- accel/accel.sh@16 -- # local accel_opc 00:12:15.732 04:54:45 -- accel/accel.sh@17 -- # local accel_module 00:12:15.732 04:54:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:12:15.732 04:54:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:12:15.732 04:54:45 -- accel/accel.sh@12 -- # build_accel_config 00:12:15.732 04:54:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:15.732 04:54:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:15.732 04:54:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:15.732 04:54:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:15.732 04:54:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:15.732 04:54:45 -- accel/accel.sh@41 -- # local IFS=, 00:12:15.732 04:54:45 -- accel/accel.sh@42 -- # jq -r . 00:12:15.732 [2024-04-27 04:54:45.527156] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:15.732 [2024-04-27 04:54:45.527451] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119660 ] 00:12:15.991 [2024-04-27 04:54:45.699070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.991 [2024-04-27 04:54:45.827239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.368 04:54:47 -- accel/accel.sh@18 -- # out=' 00:12:17.368 SPDK Configuration: 00:12:17.368 Core mask: 0x1 00:12:17.368 00:12:17.368 Accel Perf Configuration: 00:12:17.368 Workload Type: compare 00:12:17.368 Transfer size: 4096 bytes 00:12:17.368 Vector count 1 00:12:17.368 Module: software 00:12:17.368 Queue depth: 32 00:12:17.368 Allocate depth: 32 00:12:17.368 # threads/core: 1 00:12:17.368 Run time: 1 seconds 00:12:17.368 Verify: Yes 00:12:17.368 00:12:17.368 Running for 1 seconds... 00:12:17.368 00:12:17.368 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:17.368 ------------------------------------------------------------------------------------ 00:12:17.368 0,0 407680/s 1592 MiB/s 0 0 00:12:17.368 ==================================================================================== 00:12:17.368 Total 407680/s 1592 MiB/s 0 0' 00:12:17.368 04:54:47 -- accel/accel.sh@20 -- # IFS=: 00:12:17.368 04:54:47 -- accel/accel.sh@20 -- # read -r var val 00:12:17.368 04:54:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:12:17.368 04:54:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:12:17.368 04:54:47 -- accel/accel.sh@12 -- # build_accel_config 00:12:17.368 04:54:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:17.368 04:54:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:17.368 04:54:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:17.368 04:54:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:17.368 04:54:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:17.368 04:54:47 -- accel/accel.sh@41 -- # local IFS=, 00:12:17.368 04:54:47 -- accel/accel.sh@42 -- # jq -r . 00:12:17.627 [2024-04-27 04:54:47.267056] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:17.627 [2024-04-27 04:54:47.267959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119689 ] 00:12:17.627 [2024-04-27 04:54:47.438310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.886 [2024-04-27 04:54:47.585972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.886 04:54:47 -- accel/accel.sh@21 -- # val= 00:12:17.886 04:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # IFS=: 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # read -r var val 00:12:17.886 04:54:47 -- accel/accel.sh@21 -- # val= 00:12:17.886 04:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # IFS=: 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # read -r var val 00:12:17.886 04:54:47 -- accel/accel.sh@21 -- # val=0x1 00:12:17.886 04:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # IFS=: 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # read -r var val 00:12:17.886 04:54:47 -- accel/accel.sh@21 -- # val= 00:12:17.886 04:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # IFS=: 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # read -r var val 00:12:17.886 04:54:47 -- accel/accel.sh@21 -- # val= 00:12:17.886 04:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # IFS=: 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # read -r var val 00:12:17.886 04:54:47 -- accel/accel.sh@21 -- # val=compare 00:12:17.886 04:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.886 04:54:47 -- accel/accel.sh@24 -- # accel_opc=compare 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # IFS=: 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # read -r var val 00:12:17.886 04:54:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:17.886 04:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # IFS=: 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # read -r var val 00:12:17.886 04:54:47 -- accel/accel.sh@21 -- # val= 00:12:17.886 04:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # IFS=: 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # read -r var val 00:12:17.886 04:54:47 -- accel/accel.sh@21 -- # val=software 00:12:17.886 04:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.886 04:54:47 -- accel/accel.sh@23 -- # accel_module=software 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # IFS=: 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # read -r var val 00:12:17.886 04:54:47 -- accel/accel.sh@21 -- # val=32 00:12:17.886 04:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # IFS=: 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # read -r var val 00:12:17.886 04:54:47 -- accel/accel.sh@21 -- # val=32 00:12:17.886 04:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # IFS=: 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # read -r var val 00:12:17.886 04:54:47 -- accel/accel.sh@21 -- # val=1 00:12:17.886 04:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # IFS=: 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # read -r var val 00:12:17.886 04:54:47 -- accel/accel.sh@21 -- # val='1 seconds' 
00:12:17.886 04:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # IFS=: 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # read -r var val 00:12:17.886 04:54:47 -- accel/accel.sh@21 -- # val=Yes 00:12:17.886 04:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # IFS=: 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # read -r var val 00:12:17.886 04:54:47 -- accel/accel.sh@21 -- # val= 00:12:17.886 04:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # IFS=: 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # read -r var val 00:12:17.886 04:54:47 -- accel/accel.sh@21 -- # val= 00:12:17.886 04:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # IFS=: 00:12:17.886 04:54:47 -- accel/accel.sh@20 -- # read -r var val 00:12:19.263 04:54:49 -- accel/accel.sh@21 -- # val= 00:12:19.263 04:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.263 04:54:49 -- accel/accel.sh@20 -- # IFS=: 00:12:19.263 04:54:49 -- accel/accel.sh@20 -- # read -r var val 00:12:19.263 04:54:49 -- accel/accel.sh@21 -- # val= 00:12:19.263 04:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.263 04:54:49 -- accel/accel.sh@20 -- # IFS=: 00:12:19.263 04:54:49 -- accel/accel.sh@20 -- # read -r var val 00:12:19.263 04:54:49 -- accel/accel.sh@21 -- # val= 00:12:19.263 04:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.263 04:54:49 -- accel/accel.sh@20 -- # IFS=: 00:12:19.263 04:54:49 -- accel/accel.sh@20 -- # read -r var val 00:12:19.263 04:54:49 -- accel/accel.sh@21 -- # val= 00:12:19.263 04:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.263 04:54:49 -- accel/accel.sh@20 -- # IFS=: 00:12:19.263 04:54:49 -- accel/accel.sh@20 -- # read -r var val 00:12:19.263 04:54:49 -- accel/accel.sh@21 -- # val= 00:12:19.263 04:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.263 04:54:49 -- accel/accel.sh@20 -- # IFS=: 00:12:19.263 04:54:49 -- accel/accel.sh@20 -- # read -r var val 00:12:19.263 04:54:49 -- accel/accel.sh@21 -- # val= 00:12:19.263 04:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:12:19.263 04:54:49 -- accel/accel.sh@20 -- # IFS=: 00:12:19.263 04:54:49 -- accel/accel.sh@20 -- # read -r var val 00:12:19.263 04:54:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:19.263 04:54:49 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:12:19.263 04:54:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:19.263 00:12:19.263 real 0m3.542s 00:12:19.263 user 0m2.841s 00:12:19.263 sys 0m0.524s 00:12:19.263 04:54:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:19.263 ************************************ 00:12:19.263 END TEST accel_compare 00:12:19.263 04:54:49 -- common/autotest_common.sh@10 -- # set +x 00:12:19.263 ************************************ 00:12:19.263 04:54:49 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:12:19.263 04:54:49 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:12:19.263 04:54:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:19.263 04:54:49 -- common/autotest_common.sh@10 -- # set +x 00:12:19.263 ************************************ 00:12:19.263 START TEST accel_xor 00:12:19.263 ************************************ 00:12:19.263 04:54:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:12:19.263 04:54:49 -- accel/accel.sh@16 -- # local accel_opc 00:12:19.263 04:54:49 -- accel/accel.sh@17 -- # local accel_module 00:12:19.263 
04:54:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:12:19.263 04:54:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:12:19.263 04:54:49 -- accel/accel.sh@12 -- # build_accel_config 00:12:19.263 04:54:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:19.263 04:54:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:19.263 04:54:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:19.263 04:54:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:19.263 04:54:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:19.263 04:54:49 -- accel/accel.sh@41 -- # local IFS=, 00:12:19.263 04:54:49 -- accel/accel.sh@42 -- # jq -r . 00:12:19.263 [2024-04-27 04:54:49.128623] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:19.263 [2024-04-27 04:54:49.128868] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119734 ] 00:12:19.532 [2024-04-27 04:54:49.290405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.532 [2024-04-27 04:54:49.414667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.448 04:54:50 -- accel/accel.sh@18 -- # out=' 00:12:21.448 SPDK Configuration: 00:12:21.448 Core mask: 0x1 00:12:21.448 00:12:21.448 Accel Perf Configuration: 00:12:21.448 Workload Type: xor 00:12:21.448 Source buffers: 2 00:12:21.448 Transfer size: 4096 bytes 00:12:21.448 Vector count 1 00:12:21.448 Module: software 00:12:21.448 Queue depth: 32 00:12:21.448 Allocate depth: 32 00:12:21.448 # threads/core: 1 00:12:21.448 Run time: 1 seconds 00:12:21.448 Verify: Yes 00:12:21.448 00:12:21.448 Running for 1 seconds... 00:12:21.448 00:12:21.448 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:21.448 ------------------------------------------------------------------------------------ 00:12:21.448 0,0 176960/s 691 MiB/s 0 0 00:12:21.448 ==================================================================================== 00:12:21.448 Total 176960/s 691 MiB/s 0 0' 00:12:21.448 04:54:50 -- accel/accel.sh@20 -- # IFS=: 00:12:21.448 04:54:50 -- accel/accel.sh@20 -- # read -r var val 00:12:21.448 04:54:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:12:21.448 04:54:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:12:21.448 04:54:50 -- accel/accel.sh@12 -- # build_accel_config 00:12:21.448 04:54:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:21.448 04:54:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:21.448 04:54:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:21.448 04:54:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:21.448 04:54:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:21.448 04:54:50 -- accel/accel.sh@41 -- # local IFS=, 00:12:21.448 04:54:50 -- accel/accel.sh@42 -- # jq -r . 00:12:21.448 [2024-04-27 04:54:50.862570] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:21.448 [2024-04-27 04:54:50.863456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119764 ] 00:12:21.448 [2024-04-27 04:54:51.035115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.448 [2024-04-27 04:54:51.186331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.448 04:54:51 -- accel/accel.sh@21 -- # val= 00:12:21.448 04:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:21.448 04:54:51 -- accel/accel.sh@21 -- # val= 00:12:21.448 04:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:21.448 04:54:51 -- accel/accel.sh@21 -- # val=0x1 00:12:21.448 04:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:21.448 04:54:51 -- accel/accel.sh@21 -- # val= 00:12:21.448 04:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:21.448 04:54:51 -- accel/accel.sh@21 -- # val= 00:12:21.448 04:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:21.448 04:54:51 -- accel/accel.sh@21 -- # val=xor 00:12:21.448 04:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.448 04:54:51 -- accel/accel.sh@24 -- # accel_opc=xor 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:21.448 04:54:51 -- accel/accel.sh@21 -- # val=2 00:12:21.448 04:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:21.448 04:54:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:21.448 04:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:21.448 04:54:51 -- accel/accel.sh@21 -- # val= 00:12:21.448 04:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:21.448 04:54:51 -- accel/accel.sh@21 -- # val=software 00:12:21.448 04:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.448 04:54:51 -- accel/accel.sh@23 -- # accel_module=software 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:21.448 04:54:51 -- accel/accel.sh@21 -- # val=32 00:12:21.448 04:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:21.448 04:54:51 -- accel/accel.sh@21 -- # val=32 00:12:21.448 04:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:21.448 04:54:51 -- accel/accel.sh@21 -- # val=1 00:12:21.448 04:54:51 -- 
accel/accel.sh@22 -- # case "$var" in 00:12:21.448 04:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:21.449 04:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:21.449 04:54:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:21.449 04:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.449 04:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:21.449 04:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:21.449 04:54:51 -- accel/accel.sh@21 -- # val=Yes 00:12:21.449 04:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.449 04:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:21.449 04:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:21.449 04:54:51 -- accel/accel.sh@21 -- # val= 00:12:21.449 04:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.449 04:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:21.449 04:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:21.449 04:54:51 -- accel/accel.sh@21 -- # val= 00:12:21.449 04:54:51 -- accel/accel.sh@22 -- # case "$var" in 00:12:21.449 04:54:51 -- accel/accel.sh@20 -- # IFS=: 00:12:21.449 04:54:51 -- accel/accel.sh@20 -- # read -r var val 00:12:23.352 04:54:52 -- accel/accel.sh@21 -- # val= 00:12:23.352 04:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:23.352 04:54:52 -- accel/accel.sh@20 -- # IFS=: 00:12:23.352 04:54:52 -- accel/accel.sh@20 -- # read -r var val 00:12:23.352 04:54:52 -- accel/accel.sh@21 -- # val= 00:12:23.352 04:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:23.352 04:54:52 -- accel/accel.sh@20 -- # IFS=: 00:12:23.352 04:54:52 -- accel/accel.sh@20 -- # read -r var val 00:12:23.352 04:54:52 -- accel/accel.sh@21 -- # val= 00:12:23.352 04:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:23.352 04:54:52 -- accel/accel.sh@20 -- # IFS=: 00:12:23.352 04:54:52 -- accel/accel.sh@20 -- # read -r var val 00:12:23.352 04:54:52 -- accel/accel.sh@21 -- # val= 00:12:23.352 04:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:23.352 04:54:52 -- accel/accel.sh@20 -- # IFS=: 00:12:23.352 04:54:52 -- accel/accel.sh@20 -- # read -r var val 00:12:23.352 04:54:52 -- accel/accel.sh@21 -- # val= 00:12:23.352 04:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:23.352 04:54:52 -- accel/accel.sh@20 -- # IFS=: 00:12:23.352 04:54:52 -- accel/accel.sh@20 -- # read -r var val 00:12:23.352 04:54:52 -- accel/accel.sh@21 -- # val= 00:12:23.352 04:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:12:23.352 04:54:52 -- accel/accel.sh@20 -- # IFS=: 00:12:23.352 04:54:52 -- accel/accel.sh@20 -- # read -r var val 00:12:23.352 04:54:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:23.352 04:54:52 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:12:23.352 04:54:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:23.352 00:12:23.352 real 0m3.839s 00:12:23.352 user 0m3.219s 00:12:23.352 sys 0m0.437s 00:12:23.352 04:54:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:23.352 ************************************ 00:12:23.352 04:54:52 -- common/autotest_common.sh@10 -- # set +x 00:12:23.352 END TEST accel_xor 00:12:23.352 ************************************ 00:12:23.352 04:54:52 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:12:23.352 04:54:52 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:12:23.352 04:54:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:23.352 04:54:52 -- common/autotest_common.sh@10 -- # set +x 00:12:23.352 ************************************ 00:12:23.352 START TEST accel_xor 00:12:23.352 ************************************ 00:12:23.352 
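The accel_xor test that just finished XORs two 4096-byte source buffers into a destination ("Source buffers: 2" in its configuration), and the run starting below repeats the workload with three sources via -x 3. As a small illustration of the operation itself (a Python sketch under those assumptions, not SPDK code):

from functools import reduce

def xor_buffers(srcs):
    # Byte-wise XOR of equally sized source buffers into one output buffer.
    assert srcs and all(len(s) == len(srcs[0]) for s in srcs)
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*srcs))

two = xor_buffers([b'\xff' * 4096, b'\x0f' * 4096])                     # the 2-source default
three = xor_buffers([b'\xff' * 4096, b'\x0f' * 4096, b'\x01' * 4096])   # the -x 3 case below
assert two[0] == 0xF0 and three[0] == 0xF1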
04:54:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:12:23.352 04:54:52 -- accel/accel.sh@16 -- # local accel_opc 00:12:23.352 04:54:52 -- accel/accel.sh@17 -- # local accel_module 00:12:23.352 04:54:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:12:23.352 04:54:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:12:23.352 04:54:52 -- accel/accel.sh@12 -- # build_accel_config 00:12:23.352 04:54:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:23.352 04:54:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:23.352 04:54:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:23.352 04:54:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:23.352 04:54:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:23.352 04:54:52 -- accel/accel.sh@41 -- # local IFS=, 00:12:23.352 04:54:52 -- accel/accel.sh@42 -- # jq -r . 00:12:23.352 [2024-04-27 04:54:53.027908] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:23.352 [2024-04-27 04:54:53.028224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119809 ] 00:12:23.352 [2024-04-27 04:54:53.207117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.610 [2024-04-27 04:54:53.445601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.513 04:54:55 -- accel/accel.sh@18 -- # out=' 00:12:25.513 SPDK Configuration: 00:12:25.513 Core mask: 0x1 00:12:25.513 00:12:25.513 Accel Perf Configuration: 00:12:25.513 Workload Type: xor 00:12:25.513 Source buffers: 3 00:12:25.513 Transfer size: 4096 bytes 00:12:25.513 Vector count 1 00:12:25.513 Module: software 00:12:25.513 Queue depth: 32 00:12:25.513 Allocate depth: 32 00:12:25.513 # threads/core: 1 00:12:25.513 Run time: 1 seconds 00:12:25.513 Verify: Yes 00:12:25.513 00:12:25.513 Running for 1 seconds... 00:12:25.513 00:12:25.513 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:25.513 ------------------------------------------------------------------------------------ 00:12:25.513 0,0 105984/s 414 MiB/s 0 0 00:12:25.513 ==================================================================================== 00:12:25.513 Total 105984/s 414 MiB/s 0 0' 00:12:25.513 04:54:55 -- accel/accel.sh@20 -- # IFS=: 00:12:25.513 04:54:55 -- accel/accel.sh@20 -- # read -r var val 00:12:25.513 04:54:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:12:25.513 04:54:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:12:25.513 04:54:55 -- accel/accel.sh@12 -- # build_accel_config 00:12:25.513 04:54:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:25.513 04:54:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:25.513 04:54:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:25.513 04:54:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:25.513 04:54:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:25.513 04:54:55 -- accel/accel.sh@41 -- # local IFS=, 00:12:25.513 04:54:55 -- accel/accel.sh@42 -- # jq -r . 00:12:25.513 [2024-04-27 04:54:55.379094] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:25.513 [2024-04-27 04:54:55.380016] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119845 ] 00:12:25.772 [2024-04-27 04:54:55.549476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.030 [2024-04-27 04:54:55.807945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.288 04:54:56 -- accel/accel.sh@21 -- # val= 00:12:26.288 04:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # IFS=: 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # read -r var val 00:12:26.288 04:54:56 -- accel/accel.sh@21 -- # val= 00:12:26.288 04:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # IFS=: 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # read -r var val 00:12:26.288 04:54:56 -- accel/accel.sh@21 -- # val=0x1 00:12:26.288 04:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # IFS=: 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # read -r var val 00:12:26.288 04:54:56 -- accel/accel.sh@21 -- # val= 00:12:26.288 04:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # IFS=: 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # read -r var val 00:12:26.288 04:54:56 -- accel/accel.sh@21 -- # val= 00:12:26.288 04:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # IFS=: 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # read -r var val 00:12:26.288 04:54:56 -- accel/accel.sh@21 -- # val=xor 00:12:26.288 04:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.288 04:54:56 -- accel/accel.sh@24 -- # accel_opc=xor 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # IFS=: 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # read -r var val 00:12:26.288 04:54:56 -- accel/accel.sh@21 -- # val=3 00:12:26.288 04:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # IFS=: 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # read -r var val 00:12:26.288 04:54:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:26.288 04:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # IFS=: 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # read -r var val 00:12:26.288 04:54:56 -- accel/accel.sh@21 -- # val= 00:12:26.288 04:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # IFS=: 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # read -r var val 00:12:26.288 04:54:56 -- accel/accel.sh@21 -- # val=software 00:12:26.288 04:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.288 04:54:56 -- accel/accel.sh@23 -- # accel_module=software 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # IFS=: 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # read -r var val 00:12:26.288 04:54:56 -- accel/accel.sh@21 -- # val=32 00:12:26.288 04:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # IFS=: 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # read -r var val 00:12:26.288 04:54:56 -- accel/accel.sh@21 -- # val=32 00:12:26.288 04:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # IFS=: 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # read -r var val 00:12:26.288 04:54:56 -- accel/accel.sh@21 -- # val=1 00:12:26.288 04:54:56 -- 
accel/accel.sh@22 -- # case "$var" in 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # IFS=: 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # read -r var val 00:12:26.288 04:54:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:26.288 04:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # IFS=: 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # read -r var val 00:12:26.288 04:54:56 -- accel/accel.sh@21 -- # val=Yes 00:12:26.288 04:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # IFS=: 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # read -r var val 00:12:26.288 04:54:56 -- accel/accel.sh@21 -- # val= 00:12:26.288 04:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # IFS=: 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # read -r var val 00:12:26.288 04:54:56 -- accel/accel.sh@21 -- # val= 00:12:26.288 04:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # IFS=: 00:12:26.288 04:54:56 -- accel/accel.sh@20 -- # read -r var val 00:12:28.229 04:54:57 -- accel/accel.sh@21 -- # val= 00:12:28.229 04:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:28.229 04:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:28.229 04:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:28.229 04:54:57 -- accel/accel.sh@21 -- # val= 00:12:28.229 04:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:28.229 04:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:28.229 04:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:28.229 04:54:57 -- accel/accel.sh@21 -- # val= 00:12:28.229 04:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:28.229 04:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:28.229 04:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:28.229 04:54:57 -- accel/accel.sh@21 -- # val= 00:12:28.229 04:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:28.229 04:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:28.229 04:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:28.229 04:54:57 -- accel/accel.sh@21 -- # val= 00:12:28.229 04:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:28.229 04:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:28.229 04:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:28.229 04:54:57 -- accel/accel.sh@21 -- # val= 00:12:28.229 04:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:28.229 04:54:57 -- accel/accel.sh@20 -- # IFS=: 00:12:28.229 04:54:57 -- accel/accel.sh@20 -- # read -r var val 00:12:28.229 04:54:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:28.229 04:54:57 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:12:28.229 04:54:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:28.229 00:12:28.229 real 0m4.984s 00:12:28.229 user 0m3.897s 00:12:28.229 sys 0m0.914s 00:12:28.229 ************************************ 00:12:28.229 END TEST accel_xor 00:12:28.229 ************************************ 00:12:28.229 04:54:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:28.229 04:54:57 -- common/autotest_common.sh@10 -- # set +x 00:12:28.229 04:54:58 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:12:28.229 04:54:58 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:12:28.229 04:54:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:28.229 04:54:58 -- common/autotest_common.sh@10 -- # set +x 00:12:28.229 ************************************ 00:12:28.229 START TEST accel_dif_verify 00:12:28.229 ************************************ 
00:12:28.229 04:54:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:12:28.229 04:54:58 -- accel/accel.sh@16 -- # local accel_opc 00:12:28.229 04:54:58 -- accel/accel.sh@17 -- # local accel_module 00:12:28.229 04:54:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:12:28.229 04:54:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:12:28.229 04:54:58 -- accel/accel.sh@12 -- # build_accel_config 00:12:28.229 04:54:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:28.229 04:54:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:28.229 04:54:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:28.229 04:54:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:28.229 04:54:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:28.229 04:54:58 -- accel/accel.sh@41 -- # local IFS=, 00:12:28.229 04:54:58 -- accel/accel.sh@42 -- # jq -r . 00:12:28.229 [2024-04-27 04:54:58.069733] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:28.229 [2024-04-27 04:54:58.070791] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119902 ] 00:12:28.487 [2024-04-27 04:54:58.242731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.745 [2024-04-27 04:54:58.519883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.275 04:55:00 -- accel/accel.sh@18 -- # out=' 00:12:31.275 SPDK Configuration: 00:12:31.275 Core mask: 0x1 00:12:31.275 00:12:31.275 Accel Perf Configuration: 00:12:31.275 Workload Type: dif_verify 00:12:31.275 Vector size: 4096 bytes 00:12:31.275 Transfer size: 4096 bytes 00:12:31.275 Block size: 512 bytes 00:12:31.275 Metadata size: 8 bytes 00:12:31.275 Vector count 1 00:12:31.275 Module: software 00:12:31.275 Queue depth: 32 00:12:31.275 Allocate depth: 32 00:12:31.275 # threads/core: 1 00:12:31.275 Run time: 1 seconds 00:12:31.275 Verify: No 00:12:31.275 00:12:31.275 Running for 1 seconds... 00:12:31.275 00:12:31.275 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:31.275 ------------------------------------------------------------------------------------ 00:12:31.275 0,0 102176/s 405 MiB/s 0 0 00:12:31.275 ==================================================================================== 00:12:31.275 Total 102176/s 399 MiB/s 0 0' 00:12:31.275 04:55:00 -- accel/accel.sh@20 -- # IFS=: 00:12:31.275 04:55:00 -- accel/accel.sh@20 -- # read -r var val 00:12:31.275 04:55:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:12:31.275 04:55:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:12:31.275 04:55:00 -- accel/accel.sh@12 -- # build_accel_config 00:12:31.275 04:55:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:31.275 04:55:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:31.275 04:55:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:31.275 04:55:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:31.275 04:55:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:31.275 04:55:00 -- accel/accel.sh@41 -- # local IFS=, 00:12:31.276 04:55:00 -- accel/accel.sh@42 -- # jq -r . 00:12:31.276 [2024-04-27 04:55:00.701767] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:31.276 [2024-04-27 04:55:00.702118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119937 ] 00:12:31.276 [2024-04-27 04:55:00.885596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.534 [2024-04-27 04:55:01.205279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.793 04:55:01 -- accel/accel.sh@21 -- # val= 00:12:31.793 04:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:31.793 04:55:01 -- accel/accel.sh@20 -- # IFS=: 00:12:31.793 04:55:01 -- accel/accel.sh@20 -- # read -r var val 00:12:31.793 04:55:01 -- accel/accel.sh@21 -- # val= 00:12:31.793 04:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:31.793 04:55:01 -- accel/accel.sh@20 -- # IFS=: 00:12:31.793 04:55:01 -- accel/accel.sh@20 -- # read -r var val 00:12:31.793 04:55:01 -- accel/accel.sh@21 -- # val=0x1 00:12:31.793 04:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:31.793 04:55:01 -- accel/accel.sh@20 -- # IFS=: 00:12:31.793 04:55:01 -- accel/accel.sh@20 -- # read -r var val 00:12:31.793 04:55:01 -- accel/accel.sh@21 -- # val= 00:12:31.793 04:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:31.793 04:55:01 -- accel/accel.sh@20 -- # IFS=: 00:12:31.793 04:55:01 -- accel/accel.sh@20 -- # read -r var val 00:12:31.793 04:55:01 -- accel/accel.sh@21 -- # val= 00:12:31.793 04:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:31.793 04:55:01 -- accel/accel.sh@20 -- # IFS=: 00:12:31.793 04:55:01 -- accel/accel.sh@20 -- # read -r var val 00:12:31.793 04:55:01 -- accel/accel.sh@21 -- # val=dif_verify 00:12:31.793 04:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:31.793 04:55:01 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:12:31.793 04:55:01 -- accel/accel.sh@20 -- # IFS=: 00:12:31.793 04:55:01 -- accel/accel.sh@20 -- # read -r var val 00:12:31.793 04:55:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:31.793 04:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:31.793 04:55:01 -- accel/accel.sh@20 -- # IFS=: 00:12:31.793 04:55:01 -- accel/accel.sh@20 -- # read -r var val 00:12:31.793 04:55:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:31.793 04:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:31.793 04:55:01 -- accel/accel.sh@20 -- # IFS=: 00:12:31.793 04:55:01 -- accel/accel.sh@20 -- # read -r var val 00:12:31.794 04:55:01 -- accel/accel.sh@21 -- # val='512 bytes' 00:12:31.794 04:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # IFS=: 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # read -r var val 00:12:31.794 04:55:01 -- accel/accel.sh@21 -- # val='8 bytes' 00:12:31.794 04:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # IFS=: 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # read -r var val 00:12:31.794 04:55:01 -- accel/accel.sh@21 -- # val= 00:12:31.794 04:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # IFS=: 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # read -r var val 00:12:31.794 04:55:01 -- accel/accel.sh@21 -- # val=software 00:12:31.794 04:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:31.794 04:55:01 -- accel/accel.sh@23 -- # accel_module=software 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # IFS=: 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # read -r var val 00:12:31.794 04:55:01 -- 
accel/accel.sh@21 -- # val=32 00:12:31.794 04:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # IFS=: 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # read -r var val 00:12:31.794 04:55:01 -- accel/accel.sh@21 -- # val=32 00:12:31.794 04:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # IFS=: 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # read -r var val 00:12:31.794 04:55:01 -- accel/accel.sh@21 -- # val=1 00:12:31.794 04:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # IFS=: 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # read -r var val 00:12:31.794 04:55:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:31.794 04:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # IFS=: 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # read -r var val 00:12:31.794 04:55:01 -- accel/accel.sh@21 -- # val=No 00:12:31.794 04:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # IFS=: 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # read -r var val 00:12:31.794 04:55:01 -- accel/accel.sh@21 -- # val= 00:12:31.794 04:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # IFS=: 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # read -r var val 00:12:31.794 04:55:01 -- accel/accel.sh@21 -- # val= 00:12:31.794 04:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # IFS=: 00:12:31.794 04:55:01 -- accel/accel.sh@20 -- # read -r var val 00:12:33.697 04:55:03 -- accel/accel.sh@21 -- # val= 00:12:33.697 04:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:12:33.697 04:55:03 -- accel/accel.sh@20 -- # IFS=: 00:12:33.697 04:55:03 -- accel/accel.sh@20 -- # read -r var val 00:12:33.697 04:55:03 -- accel/accel.sh@21 -- # val= 00:12:33.697 04:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:12:33.697 04:55:03 -- accel/accel.sh@20 -- # IFS=: 00:12:33.697 04:55:03 -- accel/accel.sh@20 -- # read -r var val 00:12:33.697 04:55:03 -- accel/accel.sh@21 -- # val= 00:12:33.697 04:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:12:33.697 04:55:03 -- accel/accel.sh@20 -- # IFS=: 00:12:33.697 04:55:03 -- accel/accel.sh@20 -- # read -r var val 00:12:33.697 04:55:03 -- accel/accel.sh@21 -- # val= 00:12:33.697 04:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:12:33.697 04:55:03 -- accel/accel.sh@20 -- # IFS=: 00:12:33.697 04:55:03 -- accel/accel.sh@20 -- # read -r var val 00:12:33.697 04:55:03 -- accel/accel.sh@21 -- # val= 00:12:33.697 04:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:12:33.697 04:55:03 -- accel/accel.sh@20 -- # IFS=: 00:12:33.697 04:55:03 -- accel/accel.sh@20 -- # read -r var val 00:12:33.697 04:55:03 -- accel/accel.sh@21 -- # val= 00:12:33.697 04:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:12:33.697 04:55:03 -- accel/accel.sh@20 -- # IFS=: 00:12:33.697 04:55:03 -- accel/accel.sh@20 -- # read -r var val 00:12:33.697 04:55:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:33.697 04:55:03 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:12:33.697 04:55:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:33.697 00:12:33.697 real 0m5.066s 00:12:33.697 user 0m3.838s 00:12:33.697 sys 0m1.048s 00:12:33.697 04:55:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:33.697 ************************************ 00:12:33.697 END TEST accel_dif_verify 00:12:33.697 
************************************ 00:12:33.697 04:55:03 -- common/autotest_common.sh@10 -- # set +x 00:12:33.697 04:55:03 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:12:33.697 04:55:03 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:12:33.697 04:55:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:33.697 04:55:03 -- common/autotest_common.sh@10 -- # set +x 00:12:33.697 ************************************ 00:12:33.697 START TEST accel_dif_generate 00:12:33.697 ************************************ 00:12:33.697 04:55:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:12:33.697 04:55:03 -- accel/accel.sh@16 -- # local accel_opc 00:12:33.697 04:55:03 -- accel/accel.sh@17 -- # local accel_module 00:12:33.697 04:55:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:12:33.697 04:55:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:12:33.697 04:55:03 -- accel/accel.sh@12 -- # build_accel_config 00:12:33.697 04:55:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:33.697 04:55:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:33.697 04:55:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:33.697 04:55:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:33.697 04:55:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:33.697 04:55:03 -- accel/accel.sh@41 -- # local IFS=, 00:12:33.697 04:55:03 -- accel/accel.sh@42 -- # jq -r . 00:12:33.697 [2024-04-27 04:55:03.193678] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:33.697 [2024-04-27 04:55:03.194040] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119995 ] 00:12:33.697 [2024-04-27 04:55:03.378814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.956 [2024-04-27 04:55:03.650531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.875 04:55:05 -- accel/accel.sh@18 -- # out=' 00:12:35.875 SPDK Configuration: 00:12:35.875 Core mask: 0x1 00:12:35.875 00:12:35.875 Accel Perf Configuration: 00:12:35.875 Workload Type: dif_generate 00:12:35.875 Vector size: 4096 bytes 00:12:35.875 Transfer size: 4096 bytes 00:12:35.875 Block size: 512 bytes 00:12:35.875 Metadata size: 8 bytes 00:12:35.875 Vector count 1 00:12:35.875 Module: software 00:12:35.875 Queue depth: 32 00:12:35.875 Allocate depth: 32 00:12:35.875 # threads/core: 1 00:12:35.875 Run time: 1 seconds 00:12:35.875 Verify: No 00:12:35.875 00:12:35.875 Running for 1 seconds... 
00:12:35.875 00:12:35.875 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:35.875 ------------------------------------------------------------------------------------ 00:12:35.875 0,0 120256/s 477 MiB/s 0 0 00:12:35.875 ==================================================================================== 00:12:35.875 Total 120256/s 469 MiB/s 0 0' 00:12:35.875 04:55:05 -- accel/accel.sh@20 -- # IFS=: 00:12:35.875 04:55:05 -- accel/accel.sh@20 -- # read -r var val 00:12:35.875 04:55:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:12:35.875 04:55:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:12:35.875 04:55:05 -- accel/accel.sh@12 -- # build_accel_config 00:12:35.875 04:55:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:35.875 04:55:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:35.875 04:55:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:35.875 04:55:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:35.875 04:55:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:35.875 04:55:05 -- accel/accel.sh@41 -- # local IFS=, 00:12:35.875 04:55:05 -- accel/accel.sh@42 -- # jq -r . 00:12:35.875 [2024-04-27 04:55:05.660741] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:35.875 [2024-04-27 04:55:05.661089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120031 ] 00:12:36.134 [2024-04-27 04:55:05.840444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.134 [2024-04-27 04:55:06.006624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.393 04:55:06 -- accel/accel.sh@21 -- # val= 00:12:36.393 04:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # IFS=: 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # read -r var val 00:12:36.393 04:55:06 -- accel/accel.sh@21 -- # val= 00:12:36.393 04:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # IFS=: 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # read -r var val 00:12:36.393 04:55:06 -- accel/accel.sh@21 -- # val=0x1 00:12:36.393 04:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # IFS=: 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # read -r var val 00:12:36.393 04:55:06 -- accel/accel.sh@21 -- # val= 00:12:36.393 04:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # IFS=: 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # read -r var val 00:12:36.393 04:55:06 -- accel/accel.sh@21 -- # val= 00:12:36.393 04:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # IFS=: 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # read -r var val 00:12:36.393 04:55:06 -- accel/accel.sh@21 -- # val=dif_generate 00:12:36.393 04:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:36.393 04:55:06 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # IFS=: 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # read -r var val 00:12:36.393 04:55:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:36.393 04:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # IFS=: 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # read -r var val 
00:12:36.393 04:55:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:36.393 04:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # IFS=: 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # read -r var val 00:12:36.393 04:55:06 -- accel/accel.sh@21 -- # val='512 bytes' 00:12:36.393 04:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # IFS=: 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # read -r var val 00:12:36.393 04:55:06 -- accel/accel.sh@21 -- # val='8 bytes' 00:12:36.393 04:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # IFS=: 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # read -r var val 00:12:36.393 04:55:06 -- accel/accel.sh@21 -- # val= 00:12:36.393 04:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # IFS=: 00:12:36.393 04:55:06 -- accel/accel.sh@20 -- # read -r var val 00:12:36.393 04:55:06 -- accel/accel.sh@21 -- # val=software 00:12:36.394 04:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:36.394 04:55:06 -- accel/accel.sh@23 -- # accel_module=software 00:12:36.394 04:55:06 -- accel/accel.sh@20 -- # IFS=: 00:12:36.394 04:55:06 -- accel/accel.sh@20 -- # read -r var val 00:12:36.394 04:55:06 -- accel/accel.sh@21 -- # val=32 00:12:36.394 04:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:36.394 04:55:06 -- accel/accel.sh@20 -- # IFS=: 00:12:36.394 04:55:06 -- accel/accel.sh@20 -- # read -r var val 00:12:36.394 04:55:06 -- accel/accel.sh@21 -- # val=32 00:12:36.394 04:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:36.394 04:55:06 -- accel/accel.sh@20 -- # IFS=: 00:12:36.394 04:55:06 -- accel/accel.sh@20 -- # read -r var val 00:12:36.394 04:55:06 -- accel/accel.sh@21 -- # val=1 00:12:36.394 04:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:36.394 04:55:06 -- accel/accel.sh@20 -- # IFS=: 00:12:36.394 04:55:06 -- accel/accel.sh@20 -- # read -r var val 00:12:36.394 04:55:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:36.394 04:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:36.394 04:55:06 -- accel/accel.sh@20 -- # IFS=: 00:12:36.394 04:55:06 -- accel/accel.sh@20 -- # read -r var val 00:12:36.394 04:55:06 -- accel/accel.sh@21 -- # val=No 00:12:36.394 04:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:36.394 04:55:06 -- accel/accel.sh@20 -- # IFS=: 00:12:36.394 04:55:06 -- accel/accel.sh@20 -- # read -r var val 00:12:36.394 04:55:06 -- accel/accel.sh@21 -- # val= 00:12:36.394 04:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:36.394 04:55:06 -- accel/accel.sh@20 -- # IFS=: 00:12:36.394 04:55:06 -- accel/accel.sh@20 -- # read -r var val 00:12:36.394 04:55:06 -- accel/accel.sh@21 -- # val= 00:12:36.394 04:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:12:36.394 04:55:06 -- accel/accel.sh@20 -- # IFS=: 00:12:36.394 04:55:06 -- accel/accel.sh@20 -- # read -r var val 00:12:37.771 04:55:07 -- accel/accel.sh@21 -- # val= 00:12:37.771 04:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:12:37.771 04:55:07 -- accel/accel.sh@20 -- # IFS=: 00:12:37.771 04:55:07 -- accel/accel.sh@20 -- # read -r var val 00:12:37.771 04:55:07 -- accel/accel.sh@21 -- # val= 00:12:37.771 04:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:12:37.771 04:55:07 -- accel/accel.sh@20 -- # IFS=: 00:12:37.771 04:55:07 -- accel/accel.sh@20 -- # read -r var val 00:12:37.771 04:55:07 -- accel/accel.sh@21 -- # val= 00:12:37.771 04:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:12:37.771 04:55:07 -- 
accel/accel.sh@20 -- # IFS=: 00:12:37.771 04:55:07 -- accel/accel.sh@20 -- # read -r var val 00:12:37.771 04:55:07 -- accel/accel.sh@21 -- # val= 00:12:37.771 04:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:12:37.771 04:55:07 -- accel/accel.sh@20 -- # IFS=: 00:12:37.771 04:55:07 -- accel/accel.sh@20 -- # read -r var val 00:12:37.771 04:55:07 -- accel/accel.sh@21 -- # val= 00:12:37.771 04:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:12:37.771 04:55:07 -- accel/accel.sh@20 -- # IFS=: 00:12:37.771 04:55:07 -- accel/accel.sh@20 -- # read -r var val 00:12:37.771 04:55:07 -- accel/accel.sh@21 -- # val= 00:12:37.771 04:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:12:37.771 04:55:07 -- accel/accel.sh@20 -- # IFS=: 00:12:37.771 04:55:07 -- accel/accel.sh@20 -- # read -r var val 00:12:37.771 04:55:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:37.771 04:55:07 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:12:37.771 04:55:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:37.771 00:12:37.771 real 0m4.417s 00:12:37.771 user 0m3.493s 00:12:37.771 sys 0m0.750s 00:12:37.771 04:55:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.771 ************************************ 00:12:37.771 END TEST accel_dif_generate 00:12:37.771 ************************************ 00:12:37.771 04:55:07 -- common/autotest_common.sh@10 -- # set +x 00:12:37.771 04:55:07 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:12:37.771 04:55:07 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:12:37.771 04:55:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:37.771 04:55:07 -- common/autotest_common.sh@10 -- # set +x 00:12:37.771 ************************************ 00:12:37.771 START TEST accel_dif_generate_copy 00:12:37.771 ************************************ 00:12:37.771 04:55:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:12:37.771 04:55:07 -- accel/accel.sh@16 -- # local accel_opc 00:12:37.771 04:55:07 -- accel/accel.sh@17 -- # local accel_module 00:12:37.771 04:55:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:12:37.771 04:55:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:12:37.771 04:55:07 -- accel/accel.sh@12 -- # build_accel_config 00:12:37.771 04:55:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:37.771 04:55:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:37.771 04:55:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:37.771 04:55:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:37.771 04:55:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:37.771 04:55:07 -- accel/accel.sh@41 -- # local IFS=, 00:12:37.771 04:55:07 -- accel/accel.sh@42 -- # jq -r . 00:12:37.771 [2024-04-27 04:55:07.653057] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:37.771 [2024-04-27 04:55:07.653819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120077 ] 00:12:38.030 [2024-04-27 04:55:07.815889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.030 [2024-04-27 04:55:07.917302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.933 04:55:09 -- accel/accel.sh@18 -- # out=' 00:12:39.933 SPDK Configuration: 00:12:39.933 Core mask: 0x1 00:12:39.933 00:12:39.933 Accel Perf Configuration: 00:12:39.933 Workload Type: dif_generate_copy 00:12:39.933 Vector size: 4096 bytes 00:12:39.933 Transfer size: 4096 bytes 00:12:39.933 Vector count 1 00:12:39.933 Module: software 00:12:39.933 Queue depth: 32 00:12:39.933 Allocate depth: 32 00:12:39.933 # threads/core: 1 00:12:39.933 Run time: 1 seconds 00:12:39.933 Verify: No 00:12:39.933 00:12:39.933 Running for 1 seconds... 00:12:39.933 00:12:39.933 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:39.933 ------------------------------------------------------------------------------------ 00:12:39.933 0,0 93600/s 371 MiB/s 0 0 00:12:39.933 ==================================================================================== 00:12:39.933 Total 93600/s 365 MiB/s 0 0' 00:12:39.933 04:55:09 -- accel/accel.sh@20 -- # IFS=: 00:12:39.933 04:55:09 -- accel/accel.sh@20 -- # read -r var val 00:12:39.933 04:55:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:12:39.933 04:55:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:12:39.933 04:55:09 -- accel/accel.sh@12 -- # build_accel_config 00:12:39.933 04:55:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:39.933 04:55:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:39.933 04:55:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:39.933 04:55:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:39.933 04:55:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:39.933 04:55:09 -- accel/accel.sh@41 -- # local IFS=, 00:12:39.933 04:55:09 -- accel/accel.sh@42 -- # jq -r . 00:12:39.933 [2024-04-27 04:55:09.479774] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:39.933 [2024-04-27 04:55:09.480249] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120106 ] 00:12:39.933 [2024-04-27 04:55:09.653731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.933 [2024-04-27 04:55:09.817583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.191 04:55:09 -- accel/accel.sh@21 -- # val= 00:12:40.191 04:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.191 04:55:09 -- accel/accel.sh@20 -- # IFS=: 00:12:40.191 04:55:09 -- accel/accel.sh@20 -- # read -r var val 00:12:40.191 04:55:09 -- accel/accel.sh@21 -- # val= 00:12:40.191 04:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.191 04:55:09 -- accel/accel.sh@20 -- # IFS=: 00:12:40.191 04:55:09 -- accel/accel.sh@20 -- # read -r var val 00:12:40.191 04:55:09 -- accel/accel.sh@21 -- # val=0x1 00:12:40.191 04:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.191 04:55:09 -- accel/accel.sh@20 -- # IFS=: 00:12:40.191 04:55:09 -- accel/accel.sh@20 -- # read -r var val 00:12:40.191 04:55:09 -- accel/accel.sh@21 -- # val= 00:12:40.191 04:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.191 04:55:09 -- accel/accel.sh@20 -- # IFS=: 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # read -r var val 00:12:40.192 04:55:09 -- accel/accel.sh@21 -- # val= 00:12:40.192 04:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # IFS=: 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # read -r var val 00:12:40.192 04:55:09 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:12:40.192 04:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.192 04:55:09 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # IFS=: 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # read -r var val 00:12:40.192 04:55:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:40.192 04:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # IFS=: 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # read -r var val 00:12:40.192 04:55:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:40.192 04:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # IFS=: 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # read -r var val 00:12:40.192 04:55:09 -- accel/accel.sh@21 -- # val= 00:12:40.192 04:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # IFS=: 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # read -r var val 00:12:40.192 04:55:09 -- accel/accel.sh@21 -- # val=software 00:12:40.192 04:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.192 04:55:09 -- accel/accel.sh@23 -- # accel_module=software 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # IFS=: 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # read -r var val 00:12:40.192 04:55:09 -- accel/accel.sh@21 -- # val=32 00:12:40.192 04:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # IFS=: 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # read -r var val 00:12:40.192 04:55:09 -- accel/accel.sh@21 -- # val=32 00:12:40.192 04:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # IFS=: 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # read -r var val 00:12:40.192 04:55:09 -- accel/accel.sh@21 
-- # val=1 00:12:40.192 04:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # IFS=: 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # read -r var val 00:12:40.192 04:55:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:40.192 04:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # IFS=: 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # read -r var val 00:12:40.192 04:55:09 -- accel/accel.sh@21 -- # val=No 00:12:40.192 04:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # IFS=: 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # read -r var val 00:12:40.192 04:55:09 -- accel/accel.sh@21 -- # val= 00:12:40.192 04:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # IFS=: 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # read -r var val 00:12:40.192 04:55:09 -- accel/accel.sh@21 -- # val= 00:12:40.192 04:55:09 -- accel/accel.sh@22 -- # case "$var" in 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # IFS=: 00:12:40.192 04:55:09 -- accel/accel.sh@20 -- # read -r var val 00:12:41.566 04:55:11 -- accel/accel.sh@21 -- # val= 00:12:41.566 04:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.566 04:55:11 -- accel/accel.sh@20 -- # IFS=: 00:12:41.566 04:55:11 -- accel/accel.sh@20 -- # read -r var val 00:12:41.566 04:55:11 -- accel/accel.sh@21 -- # val= 00:12:41.566 04:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.566 04:55:11 -- accel/accel.sh@20 -- # IFS=: 00:12:41.566 04:55:11 -- accel/accel.sh@20 -- # read -r var val 00:12:41.566 04:55:11 -- accel/accel.sh@21 -- # val= 00:12:41.566 04:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.566 04:55:11 -- accel/accel.sh@20 -- # IFS=: 00:12:41.566 04:55:11 -- accel/accel.sh@20 -- # read -r var val 00:12:41.566 04:55:11 -- accel/accel.sh@21 -- # val= 00:12:41.566 04:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.566 04:55:11 -- accel/accel.sh@20 -- # IFS=: 00:12:41.566 04:55:11 -- accel/accel.sh@20 -- # read -r var val 00:12:41.566 04:55:11 -- accel/accel.sh@21 -- # val= 00:12:41.566 04:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.567 04:55:11 -- accel/accel.sh@20 -- # IFS=: 00:12:41.567 04:55:11 -- accel/accel.sh@20 -- # read -r var val 00:12:41.567 04:55:11 -- accel/accel.sh@21 -- # val= 00:12:41.567 04:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.567 04:55:11 -- accel/accel.sh@20 -- # IFS=: 00:12:41.567 04:55:11 -- accel/accel.sh@20 -- # read -r var val 00:12:41.567 04:55:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:41.567 04:55:11 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:12:41.567 04:55:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:41.567 00:12:41.567 real 0m3.705s 00:12:41.567 user 0m3.063s 00:12:41.567 sys 0m0.462s 00:12:41.567 04:55:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:41.567 ************************************ 00:12:41.567 END TEST accel_dif_generate_copy 00:12:41.567 ************************************ 00:12:41.567 04:55:11 -- common/autotest_common.sh@10 -- # set +x 00:12:41.567 04:55:11 -- accel/accel.sh@107 -- # [[ y == y ]] 00:12:41.567 04:55:11 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:41.567 04:55:11 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:12:41.567 04:55:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:41.567 04:55:11 -- 
common/autotest_common.sh@10 -- # set +x 00:12:41.567 ************************************ 00:12:41.567 START TEST accel_comp 00:12:41.567 ************************************ 00:12:41.567 04:55:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:41.567 04:55:11 -- accel/accel.sh@16 -- # local accel_opc 00:12:41.567 04:55:11 -- accel/accel.sh@17 -- # local accel_module 00:12:41.567 04:55:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:41.567 04:55:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:41.567 04:55:11 -- accel/accel.sh@12 -- # build_accel_config 00:12:41.567 04:55:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:41.567 04:55:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:41.567 04:55:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:41.567 04:55:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:41.567 04:55:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:41.567 04:55:11 -- accel/accel.sh@41 -- # local IFS=, 00:12:41.567 04:55:11 -- accel/accel.sh@42 -- # jq -r . 00:12:41.567 [2024-04-27 04:55:11.414158] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:41.567 [2024-04-27 04:55:11.414430] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120151 ] 00:12:41.824 [2024-04-27 04:55:11.588466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.082 [2024-04-27 04:55:11.728454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.460 04:55:13 -- accel/accel.sh@18 -- # out='Preparing input file... 00:12:43.460 00:12:43.460 SPDK Configuration: 00:12:43.460 Core mask: 0x1 00:12:43.460 00:12:43.460 Accel Perf Configuration: 00:12:43.460 Workload Type: compress 00:12:43.460 Transfer size: 4096 bytes 00:12:43.460 Vector count 1 00:12:43.460 Module: software 00:12:43.460 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:43.460 Queue depth: 32 00:12:43.460 Allocate depth: 32 00:12:43.460 # threads/core: 1 00:12:43.460 Run time: 1 seconds 00:12:43.460 Verify: No 00:12:43.460 00:12:43.460 Running for 1 seconds... 
00:12:43.460 00:12:43.460 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:43.460 ------------------------------------------------------------------------------------ 00:12:43.460 0,0 48704/s 203 MiB/s 0 0 00:12:43.460 ==================================================================================== 00:12:43.460 Total 48704/s 190 MiB/s 0 0' 00:12:43.460 04:55:13 -- accel/accel.sh@20 -- # IFS=: 00:12:43.460 04:55:13 -- accel/accel.sh@20 -- # read -r var val 00:12:43.460 04:55:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:43.460 04:55:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:43.460 04:55:13 -- accel/accel.sh@12 -- # build_accel_config 00:12:43.460 04:55:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:43.460 04:55:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:43.460 04:55:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:43.460 04:55:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:43.460 04:55:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:43.460 04:55:13 -- accel/accel.sh@41 -- # local IFS=, 00:12:43.460 04:55:13 -- accel/accel.sh@42 -- # jq -r . 00:12:43.460 [2024-04-27 04:55:13.207149] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:43.460 [2024-04-27 04:55:13.207476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120185 ] 00:12:43.718 [2024-04-27 04:55:13.380142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.718 [2024-04-27 04:55:13.540294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.977 04:55:13 -- accel/accel.sh@21 -- # val= 00:12:43.977 04:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # IFS=: 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # read -r var val 00:12:43.977 04:55:13 -- accel/accel.sh@21 -- # val= 00:12:43.977 04:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # IFS=: 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # read -r var val 00:12:43.977 04:55:13 -- accel/accel.sh@21 -- # val= 00:12:43.977 04:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # IFS=: 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # read -r var val 00:12:43.977 04:55:13 -- accel/accel.sh@21 -- # val=0x1 00:12:43.977 04:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # IFS=: 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # read -r var val 00:12:43.977 04:55:13 -- accel/accel.sh@21 -- # val= 00:12:43.977 04:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # IFS=: 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # read -r var val 00:12:43.977 04:55:13 -- accel/accel.sh@21 -- # val= 00:12:43.977 04:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # IFS=: 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # read -r var val 00:12:43.977 04:55:13 -- accel/accel.sh@21 -- # val=compress 00:12:43.977 04:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.977 04:55:13 -- accel/accel.sh@24 -- # accel_opc=compress 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # IFS=: 
00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # read -r var val 00:12:43.977 04:55:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:43.977 04:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # IFS=: 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # read -r var val 00:12:43.977 04:55:13 -- accel/accel.sh@21 -- # val= 00:12:43.977 04:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # IFS=: 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # read -r var val 00:12:43.977 04:55:13 -- accel/accel.sh@21 -- # val=software 00:12:43.977 04:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.977 04:55:13 -- accel/accel.sh@23 -- # accel_module=software 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # IFS=: 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # read -r var val 00:12:43.977 04:55:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:43.977 04:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # IFS=: 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # read -r var val 00:12:43.977 04:55:13 -- accel/accel.sh@21 -- # val=32 00:12:43.977 04:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # IFS=: 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # read -r var val 00:12:43.977 04:55:13 -- accel/accel.sh@21 -- # val=32 00:12:43.977 04:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # IFS=: 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # read -r var val 00:12:43.977 04:55:13 -- accel/accel.sh@21 -- # val=1 00:12:43.977 04:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # IFS=: 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # read -r var val 00:12:43.977 04:55:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:43.977 04:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # IFS=: 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # read -r var val 00:12:43.977 04:55:13 -- accel/accel.sh@21 -- # val=No 00:12:43.977 04:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # IFS=: 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # read -r var val 00:12:43.977 04:55:13 -- accel/accel.sh@21 -- # val= 00:12:43.977 04:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # IFS=: 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # read -r var val 00:12:43.977 04:55:13 -- accel/accel.sh@21 -- # val= 00:12:43.977 04:55:13 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # IFS=: 00:12:43.977 04:55:13 -- accel/accel.sh@20 -- # read -r var val 00:12:45.377 04:55:15 -- accel/accel.sh@21 -- # val= 00:12:45.377 04:55:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:45.377 04:55:15 -- accel/accel.sh@20 -- # IFS=: 00:12:45.377 04:55:15 -- accel/accel.sh@20 -- # read -r var val 00:12:45.377 04:55:15 -- accel/accel.sh@21 -- # val= 00:12:45.377 04:55:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:45.377 04:55:15 -- accel/accel.sh@20 -- # IFS=: 00:12:45.377 04:55:15 -- accel/accel.sh@20 -- # read -r var val 00:12:45.377 04:55:15 -- accel/accel.sh@21 -- # val= 00:12:45.377 04:55:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:45.377 04:55:15 -- accel/accel.sh@20 -- # IFS=: 00:12:45.377 04:55:15 -- accel/accel.sh@20 -- # read -r var val 00:12:45.377 04:55:15 -- accel/accel.sh@21 -- # val= 
00:12:45.377 04:55:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:45.377 04:55:15 -- accel/accel.sh@20 -- # IFS=: 00:12:45.377 04:55:15 -- accel/accel.sh@20 -- # read -r var val 00:12:45.377 04:55:15 -- accel/accel.sh@21 -- # val= 00:12:45.377 04:55:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:45.377 04:55:15 -- accel/accel.sh@20 -- # IFS=: 00:12:45.377 04:55:15 -- accel/accel.sh@20 -- # read -r var val 00:12:45.377 04:55:15 -- accel/accel.sh@21 -- # val= 00:12:45.377 04:55:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:45.377 04:55:15 -- accel/accel.sh@20 -- # IFS=: 00:12:45.377 04:55:15 -- accel/accel.sh@20 -- # read -r var val 00:12:45.377 04:55:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:45.377 04:55:15 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:12:45.377 04:55:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:45.377 00:12:45.377 real 0m3.673s 00:12:45.377 user 0m3.053s 00:12:45.377 sys 0m0.436s 00:12:45.377 ************************************ 00:12:45.377 END TEST accel_comp 00:12:45.377 04:55:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.377 04:55:15 -- common/autotest_common.sh@10 -- # set +x 00:12:45.377 ************************************ 00:12:45.377 04:55:15 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:45.377 04:55:15 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:12:45.377 04:55:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:45.377 04:55:15 -- common/autotest_common.sh@10 -- # set +x 00:12:45.377 ************************************ 00:12:45.377 START TEST accel_decomp 00:12:45.377 ************************************ 00:12:45.377 04:55:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:45.377 04:55:15 -- accel/accel.sh@16 -- # local accel_opc 00:12:45.377 04:55:15 -- accel/accel.sh@17 -- # local accel_module 00:12:45.377 04:55:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:45.377 04:55:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:45.377 04:55:15 -- accel/accel.sh@12 -- # build_accel_config 00:12:45.377 04:55:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:45.377 04:55:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:45.377 04:55:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:45.377 04:55:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:45.377 04:55:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:45.377 04:55:15 -- accel/accel.sh@41 -- # local IFS=, 00:12:45.377 04:55:15 -- accel/accel.sh@42 -- # jq -r . 00:12:45.377 [2024-04-27 04:55:15.135399] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:45.378 [2024-04-27 04:55:15.136142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120219 ] 00:12:45.636 [2024-04-27 04:55:15.296180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.636 [2024-04-27 04:55:15.432501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.012 04:55:16 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:12:47.012 00:12:47.012 SPDK Configuration: 00:12:47.012 Core mask: 0x1 00:12:47.012 00:12:47.012 Accel Perf Configuration: 00:12:47.012 Workload Type: decompress 00:12:47.012 Transfer size: 4096 bytes 00:12:47.012 Vector count 1 00:12:47.012 Module: software 00:12:47.012 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:47.012 Queue depth: 32 00:12:47.012 Allocate depth: 32 00:12:47.012 # threads/core: 1 00:12:47.012 Run time: 1 seconds 00:12:47.012 Verify: Yes 00:12:47.012 00:12:47.012 Running for 1 seconds... 00:12:47.012 00:12:47.012 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:47.012 ------------------------------------------------------------------------------------ 00:12:47.012 0,0 61728/s 113 MiB/s 0 0 00:12:47.012 ==================================================================================== 00:12:47.012 Total 61728/s 241 MiB/s 0 0' 00:12:47.012 04:55:16 -- accel/accel.sh@20 -- # IFS=: 00:12:47.012 04:55:16 -- accel/accel.sh@20 -- # read -r var val 00:12:47.012 04:55:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:47.012 04:55:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:47.012 04:55:16 -- accel/accel.sh@12 -- # build_accel_config 00:12:47.012 04:55:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:47.012 04:55:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:47.012 04:55:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:47.012 04:55:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:47.012 04:55:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:47.012 04:55:16 -- accel/accel.sh@41 -- # local IFS=, 00:12:47.012 04:55:16 -- accel/accel.sh@42 -- # jq -r . 00:12:47.012 [2024-04-27 04:55:16.879263] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:47.012 [2024-04-27 04:55:16.879566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120256 ] 00:12:47.271 [2024-04-27 04:55:17.048277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.529 [2024-04-27 04:55:17.174068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.529 04:55:17 -- accel/accel.sh@21 -- # val= 00:12:47.529 04:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # IFS=: 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # read -r var val 00:12:47.529 04:55:17 -- accel/accel.sh@21 -- # val= 00:12:47.529 04:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # IFS=: 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # read -r var val 00:12:47.529 04:55:17 -- accel/accel.sh@21 -- # val= 00:12:47.529 04:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # IFS=: 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # read -r var val 00:12:47.529 04:55:17 -- accel/accel.sh@21 -- # val=0x1 00:12:47.529 04:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # IFS=: 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # read -r var val 00:12:47.529 04:55:17 -- accel/accel.sh@21 -- # val= 00:12:47.529 04:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # IFS=: 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # read -r var val 00:12:47.529 04:55:17 -- accel/accel.sh@21 -- # val= 00:12:47.529 04:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # IFS=: 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # read -r var val 00:12:47.529 04:55:17 -- accel/accel.sh@21 -- # val=decompress 00:12:47.529 04:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:12:47.529 04:55:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # IFS=: 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # read -r var val 00:12:47.529 04:55:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:47.529 04:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # IFS=: 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # read -r var val 00:12:47.529 04:55:17 -- accel/accel.sh@21 -- # val= 00:12:47.529 04:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # IFS=: 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # read -r var val 00:12:47.529 04:55:17 -- accel/accel.sh@21 -- # val=software 00:12:47.529 04:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:12:47.529 04:55:17 -- accel/accel.sh@23 -- # accel_module=software 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # IFS=: 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # read -r var val 00:12:47.529 04:55:17 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:47.529 04:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # IFS=: 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # read -r var val 00:12:47.529 04:55:17 -- accel/accel.sh@21 -- # val=32 00:12:47.529 04:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # IFS=: 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # read -r var val 00:12:47.529 04:55:17 -- 
accel/accel.sh@21 -- # val=32 00:12:47.529 04:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # IFS=: 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # read -r var val 00:12:47.529 04:55:17 -- accel/accel.sh@21 -- # val=1 00:12:47.529 04:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # IFS=: 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # read -r var val 00:12:47.529 04:55:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:47.529 04:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # IFS=: 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # read -r var val 00:12:47.529 04:55:17 -- accel/accel.sh@21 -- # val=Yes 00:12:47.529 04:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # IFS=: 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # read -r var val 00:12:47.529 04:55:17 -- accel/accel.sh@21 -- # val= 00:12:47.529 04:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # IFS=: 00:12:47.529 04:55:17 -- accel/accel.sh@20 -- # read -r var val 00:12:47.530 04:55:17 -- accel/accel.sh@21 -- # val= 00:12:47.530 04:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:12:47.530 04:55:17 -- accel/accel.sh@20 -- # IFS=: 00:12:47.530 04:55:17 -- accel/accel.sh@20 -- # read -r var val 00:12:48.905 04:55:18 -- accel/accel.sh@21 -- # val= 00:12:48.905 04:55:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:48.905 04:55:18 -- accel/accel.sh@20 -- # IFS=: 00:12:48.905 04:55:18 -- accel/accel.sh@20 -- # read -r var val 00:12:48.905 04:55:18 -- accel/accel.sh@21 -- # val= 00:12:48.905 04:55:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:48.905 04:55:18 -- accel/accel.sh@20 -- # IFS=: 00:12:48.905 04:55:18 -- accel/accel.sh@20 -- # read -r var val 00:12:48.905 04:55:18 -- accel/accel.sh@21 -- # val= 00:12:48.905 04:55:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:48.905 04:55:18 -- accel/accel.sh@20 -- # IFS=: 00:12:48.905 04:55:18 -- accel/accel.sh@20 -- # read -r var val 00:12:48.905 04:55:18 -- accel/accel.sh@21 -- # val= 00:12:48.905 04:55:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:48.905 04:55:18 -- accel/accel.sh@20 -- # IFS=: 00:12:48.905 04:55:18 -- accel/accel.sh@20 -- # read -r var val 00:12:48.905 04:55:18 -- accel/accel.sh@21 -- # val= 00:12:48.905 04:55:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:48.905 04:55:18 -- accel/accel.sh@20 -- # IFS=: 00:12:48.905 04:55:18 -- accel/accel.sh@20 -- # read -r var val 00:12:48.905 04:55:18 -- accel/accel.sh@21 -- # val= 00:12:48.905 04:55:18 -- accel/accel.sh@22 -- # case "$var" in 00:12:48.905 04:55:18 -- accel/accel.sh@20 -- # IFS=: 00:12:48.905 04:55:18 -- accel/accel.sh@20 -- # read -r var val 00:12:48.905 04:55:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:48.905 04:55:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:48.905 04:55:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:48.905 00:12:48.905 real 0m3.507s 00:12:48.905 user 0m2.838s 00:12:48.905 sys 0m0.488s 00:12:48.905 04:55:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:48.906 04:55:18 -- common/autotest_common.sh@10 -- # set +x 00:12:48.906 ************************************ 00:12:48.906 END TEST accel_decomp 00:12:48.906 ************************************ 00:12:48.906 04:55:18 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
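The decompress-full test drives the same accel_perf binary; the flags in the run_test line above (-t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0) are forwarded to it unchanged. A minimal standalone sketch under the same assumption as before (the harness-supplied -c /dev/fd/62 JSON config is omitted, so the default software module applies; nothing is added here beyond the flags already visible in this log):

    # hedged sketch of the full-buffer decompress workload from this test
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0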
00:12:48.906 04:55:18 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:48.906 04:55:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:48.906 04:55:18 -- common/autotest_common.sh@10 -- # set +x 00:12:48.906 ************************************ 00:12:48.906 START TEST accel_decmop_full 00:12:48.906 ************************************ 00:12:48.906 04:55:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:48.906 04:55:18 -- accel/accel.sh@16 -- # local accel_opc 00:12:48.906 04:55:18 -- accel/accel.sh@17 -- # local accel_module 00:12:48.906 04:55:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:48.906 04:55:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:48.906 04:55:18 -- accel/accel.sh@12 -- # build_accel_config 00:12:48.906 04:55:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:48.906 04:55:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:48.906 04:55:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:48.906 04:55:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:48.906 04:55:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:48.906 04:55:18 -- accel/accel.sh@41 -- # local IFS=, 00:12:48.906 04:55:18 -- accel/accel.sh@42 -- # jq -r . 00:12:48.906 [2024-04-27 04:55:18.695735] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:48.906 [2024-04-27 04:55:18.696732] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120302 ] 00:12:49.164 [2024-04-27 04:55:18.883777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.164 [2024-04-27 04:55:18.991803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.537 04:55:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:12:50.537 00:12:50.537 SPDK Configuration: 00:12:50.537 Core mask: 0x1 00:12:50.537 00:12:50.537 Accel Perf Configuration: 00:12:50.537 Workload Type: decompress 00:12:50.537 Transfer size: 111250 bytes 00:12:50.537 Vector count 1 00:12:50.537 Module: software 00:12:50.537 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:50.537 Queue depth: 32 00:12:50.537 Allocate depth: 32 00:12:50.537 # threads/core: 1 00:12:50.537 Run time: 1 seconds 00:12:50.537 Verify: Yes 00:12:50.537 00:12:50.537 Running for 1 seconds... 
00:12:50.537 00:12:50.537 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:50.537 ------------------------------------------------------------------------------------ 00:12:50.538 0,0 4512/s 186 MiB/s 0 0 00:12:50.538 ==================================================================================== 00:12:50.538 Total 4512/s 478 MiB/s 0 0' 00:12:50.538 04:55:20 -- accel/accel.sh@20 -- # IFS=: 00:12:50.538 04:55:20 -- accel/accel.sh@20 -- # read -r var val 00:12:50.538 04:55:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:50.538 04:55:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:50.538 04:55:20 -- accel/accel.sh@12 -- # build_accel_config 00:12:50.538 04:55:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:50.538 04:55:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:50.538 04:55:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:50.538 04:55:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:50.538 04:55:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:50.538 04:55:20 -- accel/accel.sh@41 -- # local IFS=, 00:12:50.538 04:55:20 -- accel/accel.sh@42 -- # jq -r . 00:12:50.796 [2024-04-27 04:55:20.447782] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:50.796 [2024-04-27 04:55:20.448033] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120331 ] 00:12:50.796 [2024-04-27 04:55:20.620232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.053 [2024-04-27 04:55:20.758401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.053 04:55:20 -- accel/accel.sh@21 -- # val= 00:12:51.053 04:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:51.053 04:55:20 -- accel/accel.sh@20 -- # IFS=: 00:12:51.053 04:55:20 -- accel/accel.sh@20 -- # read -r var val 00:12:51.053 04:55:20 -- accel/accel.sh@21 -- # val= 00:12:51.053 04:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:51.053 04:55:20 -- accel/accel.sh@20 -- # IFS=: 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # read -r var val 00:12:51.054 04:55:20 -- accel/accel.sh@21 -- # val= 00:12:51.054 04:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # IFS=: 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # read -r var val 00:12:51.054 04:55:20 -- accel/accel.sh@21 -- # val=0x1 00:12:51.054 04:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # IFS=: 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # read -r var val 00:12:51.054 04:55:20 -- accel/accel.sh@21 -- # val= 00:12:51.054 04:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # IFS=: 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # read -r var val 00:12:51.054 04:55:20 -- accel/accel.sh@21 -- # val= 00:12:51.054 04:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # IFS=: 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # read -r var val 00:12:51.054 04:55:20 -- accel/accel.sh@21 -- # val=decompress 00:12:51.054 04:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:51.054 04:55:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:51.054 04:55:20 -- 
accel/accel.sh@20 -- # IFS=: 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # read -r var val 00:12:51.054 04:55:20 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:51.054 04:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # IFS=: 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # read -r var val 00:12:51.054 04:55:20 -- accel/accel.sh@21 -- # val= 00:12:51.054 04:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # IFS=: 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # read -r var val 00:12:51.054 04:55:20 -- accel/accel.sh@21 -- # val=software 00:12:51.054 04:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:51.054 04:55:20 -- accel/accel.sh@23 -- # accel_module=software 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # IFS=: 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # read -r var val 00:12:51.054 04:55:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:51.054 04:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # IFS=: 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # read -r var val 00:12:51.054 04:55:20 -- accel/accel.sh@21 -- # val=32 00:12:51.054 04:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # IFS=: 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # read -r var val 00:12:51.054 04:55:20 -- accel/accel.sh@21 -- # val=32 00:12:51.054 04:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # IFS=: 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # read -r var val 00:12:51.054 04:55:20 -- accel/accel.sh@21 -- # val=1 00:12:51.054 04:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # IFS=: 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # read -r var val 00:12:51.054 04:55:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:51.054 04:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # IFS=: 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # read -r var val 00:12:51.054 04:55:20 -- accel/accel.sh@21 -- # val=Yes 00:12:51.054 04:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # IFS=: 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # read -r var val 00:12:51.054 04:55:20 -- accel/accel.sh@21 -- # val= 00:12:51.054 04:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # IFS=: 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # read -r var val 00:12:51.054 04:55:20 -- accel/accel.sh@21 -- # val= 00:12:51.054 04:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # IFS=: 00:12:51.054 04:55:20 -- accel/accel.sh@20 -- # read -r var val 00:12:52.428 04:55:22 -- accel/accel.sh@21 -- # val= 00:12:52.428 04:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:12:52.428 04:55:22 -- accel/accel.sh@20 -- # IFS=: 00:12:52.428 04:55:22 -- accel/accel.sh@20 -- # read -r var val 00:12:52.428 04:55:22 -- accel/accel.sh@21 -- # val= 00:12:52.428 04:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:12:52.428 04:55:22 -- accel/accel.sh@20 -- # IFS=: 00:12:52.428 04:55:22 -- accel/accel.sh@20 -- # read -r var val 00:12:52.428 04:55:22 -- accel/accel.sh@21 -- # val= 00:12:52.428 04:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:12:52.428 04:55:22 -- accel/accel.sh@20 -- # IFS=: 00:12:52.428 04:55:22 -- accel/accel.sh@20 -- # read -r var val 00:12:52.428 04:55:22 -- 
accel/accel.sh@21 -- # val= 00:12:52.428 04:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:12:52.428 04:55:22 -- accel/accel.sh@20 -- # IFS=: 00:12:52.428 04:55:22 -- accel/accel.sh@20 -- # read -r var val 00:12:52.428 04:55:22 -- accel/accel.sh@21 -- # val= 00:12:52.428 04:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:12:52.428 04:55:22 -- accel/accel.sh@20 -- # IFS=: 00:12:52.428 04:55:22 -- accel/accel.sh@20 -- # read -r var val 00:12:52.428 04:55:22 -- accel/accel.sh@21 -- # val= 00:12:52.428 04:55:22 -- accel/accel.sh@22 -- # case "$var" in 00:12:52.428 04:55:22 -- accel/accel.sh@20 -- # IFS=: 00:12:52.428 04:55:22 -- accel/accel.sh@20 -- # read -r var val 00:12:52.428 04:55:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:52.428 04:55:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:52.429 04:55:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:52.429 00:12:52.429 real 0m3.544s 00:12:52.429 user 0m2.908s 00:12:52.429 sys 0m0.471s 00:12:52.429 04:55:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:52.429 ************************************ 00:12:52.429 END TEST accel_decmop_full 00:12:52.429 ************************************ 00:12:52.429 04:55:22 -- common/autotest_common.sh@10 -- # set +x 00:12:52.429 04:55:22 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:52.429 04:55:22 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:52.429 04:55:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:52.429 04:55:22 -- common/autotest_common.sh@10 -- # set +x 00:12:52.429 ************************************ 00:12:52.429 START TEST accel_decomp_mcore 00:12:52.429 ************************************ 00:12:52.429 04:55:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:52.429 04:55:22 -- accel/accel.sh@16 -- # local accel_opc 00:12:52.429 04:55:22 -- accel/accel.sh@17 -- # local accel_module 00:12:52.429 04:55:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:52.429 04:55:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:52.429 04:55:22 -- accel/accel.sh@12 -- # build_accel_config 00:12:52.429 04:55:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:52.429 04:55:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:52.429 04:55:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:52.429 04:55:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:52.429 04:55:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:52.429 04:55:22 -- accel/accel.sh@41 -- # local IFS=, 00:12:52.429 04:55:22 -- accel/accel.sh@42 -- # jq -r . 00:12:52.429 [2024-04-27 04:55:22.294235] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:52.429 [2024-04-27 04:55:22.294491] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120369 ] 00:12:52.688 [2024-04-27 04:55:22.486430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.946 [2024-04-27 04:55:22.628595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.946 [2024-04-27 04:55:22.628751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.946 [2024-04-27 04:55:22.628894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.946 [2024-04-27 04:55:22.628900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.323 04:55:24 -- accel/accel.sh@18 -- # out='Preparing input file... 00:12:54.323 00:12:54.323 SPDK Configuration: 00:12:54.323 Core mask: 0xf 00:12:54.323 00:12:54.323 Accel Perf Configuration: 00:12:54.323 Workload Type: decompress 00:12:54.323 Transfer size: 4096 bytes 00:12:54.323 Vector count 1 00:12:54.323 Module: software 00:12:54.323 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:54.323 Queue depth: 32 00:12:54.323 Allocate depth: 32 00:12:54.323 # threads/core: 1 00:12:54.323 Run time: 1 seconds 00:12:54.323 Verify: Yes 00:12:54.323 00:12:54.323 Running for 1 seconds... 00:12:54.323 00:12:54.323 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:54.323 ------------------------------------------------------------------------------------ 00:12:54.323 0,0 50176/s 92 MiB/s 0 0 00:12:54.323 3,0 50912/s 93 MiB/s 0 0 00:12:54.323 2,0 51392/s 94 MiB/s 0 0 00:12:54.323 1,0 51072/s 94 MiB/s 0 0 00:12:54.323 ==================================================================================== 00:12:54.323 Total 203552/s 795 MiB/s 0 0' 00:12:54.323 04:55:24 -- accel/accel.sh@20 -- # IFS=: 00:12:54.323 04:55:24 -- accel/accel.sh@20 -- # read -r var val 00:12:54.323 04:55:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:54.323 04:55:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:54.323 04:55:24 -- accel/accel.sh@12 -- # build_accel_config 00:12:54.323 04:55:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:54.323 04:55:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:54.323 04:55:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:54.323 04:55:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:54.323 04:55:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:54.323 04:55:24 -- accel/accel.sh@41 -- # local IFS=, 00:12:54.323 04:55:24 -- accel/accel.sh@42 -- # jq -r . 00:12:54.323 [2024-04-27 04:55:24.080014] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:54.323 [2024-04-27 04:55:24.080252] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120402 ] 00:12:54.582 [2024-04-27 04:55:24.270101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.582 [2024-04-27 04:55:24.411978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.582 [2024-04-27 04:55:24.412149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.582 [2024-04-27 04:55:24.412277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.582 [2024-04-27 04:55:24.412284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.877 04:55:24 -- accel/accel.sh@21 -- # val= 00:12:54.878 04:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # IFS=: 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # read -r var val 00:12:54.878 04:55:24 -- accel/accel.sh@21 -- # val= 00:12:54.878 04:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # IFS=: 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # read -r var val 00:12:54.878 04:55:24 -- accel/accel.sh@21 -- # val= 00:12:54.878 04:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # IFS=: 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # read -r var val 00:12:54.878 04:55:24 -- accel/accel.sh@21 -- # val=0xf 00:12:54.878 04:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # IFS=: 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # read -r var val 00:12:54.878 04:55:24 -- accel/accel.sh@21 -- # val= 00:12:54.878 04:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # IFS=: 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # read -r var val 00:12:54.878 04:55:24 -- accel/accel.sh@21 -- # val= 00:12:54.878 04:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # IFS=: 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # read -r var val 00:12:54.878 04:55:24 -- accel/accel.sh@21 -- # val=decompress 00:12:54.878 04:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:54.878 04:55:24 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # IFS=: 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # read -r var val 00:12:54.878 04:55:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:54.878 04:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # IFS=: 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # read -r var val 00:12:54.878 04:55:24 -- accel/accel.sh@21 -- # val= 00:12:54.878 04:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # IFS=: 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # read -r var val 00:12:54.878 04:55:24 -- accel/accel.sh@21 -- # val=software 00:12:54.878 04:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:54.878 04:55:24 -- accel/accel.sh@23 -- # accel_module=software 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # IFS=: 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # read -r var val 00:12:54.878 04:55:24 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:54.878 04:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # IFS=: 
00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # read -r var val 00:12:54.878 04:55:24 -- accel/accel.sh@21 -- # val=32 00:12:54.878 04:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # IFS=: 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # read -r var val 00:12:54.878 04:55:24 -- accel/accel.sh@21 -- # val=32 00:12:54.878 04:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # IFS=: 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # read -r var val 00:12:54.878 04:55:24 -- accel/accel.sh@21 -- # val=1 00:12:54.878 04:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # IFS=: 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # read -r var val 00:12:54.878 04:55:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:54.878 04:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # IFS=: 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # read -r var val 00:12:54.878 04:55:24 -- accel/accel.sh@21 -- # val=Yes 00:12:54.878 04:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # IFS=: 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # read -r var val 00:12:54.878 04:55:24 -- accel/accel.sh@21 -- # val= 00:12:54.878 04:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # IFS=: 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # read -r var val 00:12:54.878 04:55:24 -- accel/accel.sh@21 -- # val= 00:12:54.878 04:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # IFS=: 00:12:54.878 04:55:24 -- accel/accel.sh@20 -- # read -r var val 00:12:56.290 04:55:25 -- accel/accel.sh@21 -- # val= 00:12:56.290 04:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:12:56.290 04:55:25 -- accel/accel.sh@20 -- # IFS=: 00:12:56.290 04:55:25 -- accel/accel.sh@20 -- # read -r var val 00:12:56.290 04:55:25 -- accel/accel.sh@21 -- # val= 00:12:56.290 04:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:12:56.290 04:55:25 -- accel/accel.sh@20 -- # IFS=: 00:12:56.290 04:55:25 -- accel/accel.sh@20 -- # read -r var val 00:12:56.290 04:55:25 -- accel/accel.sh@21 -- # val= 00:12:56.290 04:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:12:56.290 04:55:25 -- accel/accel.sh@20 -- # IFS=: 00:12:56.290 04:55:25 -- accel/accel.sh@20 -- # read -r var val 00:12:56.290 04:55:25 -- accel/accel.sh@21 -- # val= 00:12:56.290 04:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:12:56.290 04:55:25 -- accel/accel.sh@20 -- # IFS=: 00:12:56.290 04:55:25 -- accel/accel.sh@20 -- # read -r var val 00:12:56.290 04:55:25 -- accel/accel.sh@21 -- # val= 00:12:56.290 04:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:12:56.290 04:55:25 -- accel/accel.sh@20 -- # IFS=: 00:12:56.290 04:55:25 -- accel/accel.sh@20 -- # read -r var val 00:12:56.290 04:55:25 -- accel/accel.sh@21 -- # val= 00:12:56.290 04:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:12:56.290 04:55:25 -- accel/accel.sh@20 -- # IFS=: 00:12:56.290 04:55:25 -- accel/accel.sh@20 -- # read -r var val 00:12:56.290 04:55:25 -- accel/accel.sh@21 -- # val= 00:12:56.290 04:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:12:56.290 04:55:25 -- accel/accel.sh@20 -- # IFS=: 00:12:56.291 04:55:25 -- accel/accel.sh@20 -- # read -r var val 00:12:56.291 04:55:25 -- accel/accel.sh@21 -- # val= 00:12:56.291 04:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:12:56.291 04:55:25 -- accel/accel.sh@20 -- # IFS=: 00:12:56.291 04:55:25 -- 
accel/accel.sh@20 -- # read -r var val 00:12:56.291 04:55:25 -- accel/accel.sh@21 -- # val= 00:12:56.291 04:55:25 -- accel/accel.sh@22 -- # case "$var" in 00:12:56.291 04:55:25 -- accel/accel.sh@20 -- # IFS=: 00:12:56.291 04:55:25 -- accel/accel.sh@20 -- # read -r var val 00:12:56.291 04:55:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:56.291 04:55:25 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:56.291 04:55:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:56.291 00:12:56.291 real 0m3.639s 00:12:56.291 user 0m10.647s 00:12:56.291 sys 0m0.503s 00:12:56.291 04:55:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.291 ************************************ 00:12:56.291 END TEST accel_decomp_mcore 00:12:56.291 ************************************ 00:12:56.291 04:55:25 -- common/autotest_common.sh@10 -- # set +x 00:12:56.291 04:55:25 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:56.291 04:55:25 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:56.291 04:55:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:56.291 04:55:25 -- common/autotest_common.sh@10 -- # set +x 00:12:56.291 ************************************ 00:12:56.291 START TEST accel_decomp_full_mcore 00:12:56.291 ************************************ 00:12:56.291 04:55:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:56.291 04:55:25 -- accel/accel.sh@16 -- # local accel_opc 00:12:56.291 04:55:25 -- accel/accel.sh@17 -- # local accel_module 00:12:56.291 04:55:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:56.291 04:55:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:56.291 04:55:25 -- accel/accel.sh@12 -- # build_accel_config 00:12:56.291 04:55:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:56.291 04:55:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:56.291 04:55:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:56.291 04:55:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:56.291 04:55:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:56.291 04:55:25 -- accel/accel.sh@41 -- # local IFS=, 00:12:56.291 04:55:25 -- accel/accel.sh@42 -- # jq -r . 00:12:56.291 [2024-04-27 04:55:25.977903] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:56.291 [2024-04-27 04:55:25.978177] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120452 ] 00:12:56.291 [2024-04-27 04:55:26.159094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:56.551 [2024-04-27 04:55:26.283695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.551 [2024-04-27 04:55:26.283828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.551 [2024-04-27 04:55:26.283968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.551 [2024-04-27 04:55:26.283970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.944 04:55:27 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:12:57.944 00:12:57.944 SPDK Configuration: 00:12:57.944 Core mask: 0xf 00:12:57.944 00:12:57.944 Accel Perf Configuration: 00:12:57.944 Workload Type: decompress 00:12:57.944 Transfer size: 111250 bytes 00:12:57.944 Vector count 1 00:12:57.944 Module: software 00:12:57.944 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:57.944 Queue depth: 32 00:12:57.944 Allocate depth: 32 00:12:57.944 # threads/core: 1 00:12:57.944 Run time: 1 seconds 00:12:57.944 Verify: Yes 00:12:57.944 00:12:57.944 Running for 1 seconds... 00:12:57.944 00:12:57.944 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:57.944 ------------------------------------------------------------------------------------ 00:12:57.944 0,0 4672/s 192 MiB/s 0 0 00:12:57.944 3,0 4640/s 191 MiB/s 0 0 00:12:57.944 2,0 4640/s 191 MiB/s 0 0 00:12:57.944 1,0 4608/s 190 MiB/s 0 0 00:12:57.944 ==================================================================================== 00:12:57.944 Total 18560/s 1969 MiB/s 0 0' 00:12:57.944 04:55:27 -- accel/accel.sh@20 -- # IFS=: 00:12:57.944 04:55:27 -- accel/accel.sh@20 -- # read -r var val 00:12:57.944 04:55:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:57.944 04:55:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:57.944 04:55:27 -- accel/accel.sh@12 -- # build_accel_config 00:12:57.944 04:55:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:57.944 04:55:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:57.944 04:55:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:57.944 04:55:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:57.944 04:55:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:57.944 04:55:27 -- accel/accel.sh@41 -- # local IFS=, 00:12:57.944 04:55:27 -- accel/accel.sh@42 -- # jq -r . 00:12:57.944 [2024-04-27 04:55:27.699033] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
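The configuration block above maps one-to-one onto the accel_perf command recorded in the trace. As a manual reproduction sketch (flags copied from the trace; the -c /dev/fd/62 argument is dropped here because build_accel_config produced an empty accel JSON config for this run):

cd /home/vagrant/spdk_repo/spdk
./build/examples/accel_perf -t 1 -w decompress \
    -l test/accel/bib -y -o 0 -m 0xf    # 4-core (0xf) software decompress of the bib test file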
00:12:57.944 [2024-04-27 04:55:27.699347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120490 ] 00:12:58.209 [2024-04-27 04:55:27.887191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.209 [2024-04-27 04:55:27.991470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.209 [2024-04-27 04:55:27.991587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.209 [2024-04-27 04:55:27.991729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.209 [2024-04-27 04:55:27.991730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.209 04:55:28 -- accel/accel.sh@21 -- # val= 00:12:58.209 04:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # IFS=: 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # read -r var val 00:12:58.209 04:55:28 -- accel/accel.sh@21 -- # val= 00:12:58.209 04:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # IFS=: 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # read -r var val 00:12:58.209 04:55:28 -- accel/accel.sh@21 -- # val= 00:12:58.209 04:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # IFS=: 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # read -r var val 00:12:58.209 04:55:28 -- accel/accel.sh@21 -- # val=0xf 00:12:58.209 04:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # IFS=: 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # read -r var val 00:12:58.209 04:55:28 -- accel/accel.sh@21 -- # val= 00:12:58.209 04:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # IFS=: 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # read -r var val 00:12:58.209 04:55:28 -- accel/accel.sh@21 -- # val= 00:12:58.209 04:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # IFS=: 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # read -r var val 00:12:58.209 04:55:28 -- accel/accel.sh@21 -- # val=decompress 00:12:58.209 04:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:12:58.209 04:55:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # IFS=: 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # read -r var val 00:12:58.209 04:55:28 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:58.209 04:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # IFS=: 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # read -r var val 00:12:58.209 04:55:28 -- accel/accel.sh@21 -- # val= 00:12:58.209 04:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # IFS=: 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # read -r var val 00:12:58.209 04:55:28 -- accel/accel.sh@21 -- # val=software 00:12:58.209 04:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:12:58.209 04:55:28 -- accel/accel.sh@23 -- # accel_module=software 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # IFS=: 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # read -r var val 00:12:58.209 04:55:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:58.209 04:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # IFS=: 
00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # read -r var val 00:12:58.209 04:55:28 -- accel/accel.sh@21 -- # val=32 00:12:58.209 04:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # IFS=: 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # read -r var val 00:12:58.209 04:55:28 -- accel/accel.sh@21 -- # val=32 00:12:58.209 04:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # IFS=: 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # read -r var val 00:12:58.209 04:55:28 -- accel/accel.sh@21 -- # val=1 00:12:58.209 04:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # IFS=: 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # read -r var val 00:12:58.209 04:55:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:58.209 04:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # IFS=: 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # read -r var val 00:12:58.209 04:55:28 -- accel/accel.sh@21 -- # val=Yes 00:12:58.209 04:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # IFS=: 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # read -r var val 00:12:58.209 04:55:28 -- accel/accel.sh@21 -- # val= 00:12:58.209 04:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # IFS=: 00:12:58.209 04:55:28 -- accel/accel.sh@20 -- # read -r var val 00:12:58.209 04:55:28 -- accel/accel.sh@21 -- # val= 00:12:58.467 04:55:28 -- accel/accel.sh@22 -- # case "$var" in 00:12:58.467 04:55:28 -- accel/accel.sh@20 -- # IFS=: 00:12:58.467 04:55:28 -- accel/accel.sh@20 -- # read -r var val 00:12:59.844 04:55:29 -- accel/accel.sh@21 -- # val= 00:12:59.844 04:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:59.844 04:55:29 -- accel/accel.sh@20 -- # IFS=: 00:12:59.844 04:55:29 -- accel/accel.sh@20 -- # read -r var val 00:12:59.844 04:55:29 -- accel/accel.sh@21 -- # val= 00:12:59.844 04:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:59.844 04:55:29 -- accel/accel.sh@20 -- # IFS=: 00:12:59.844 04:55:29 -- accel/accel.sh@20 -- # read -r var val 00:12:59.844 04:55:29 -- accel/accel.sh@21 -- # val= 00:12:59.844 04:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:59.844 04:55:29 -- accel/accel.sh@20 -- # IFS=: 00:12:59.844 04:55:29 -- accel/accel.sh@20 -- # read -r var val 00:12:59.844 04:55:29 -- accel/accel.sh@21 -- # val= 00:12:59.844 04:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:59.844 04:55:29 -- accel/accel.sh@20 -- # IFS=: 00:12:59.844 04:55:29 -- accel/accel.sh@20 -- # read -r var val 00:12:59.844 04:55:29 -- accel/accel.sh@21 -- # val= 00:12:59.844 04:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:59.844 04:55:29 -- accel/accel.sh@20 -- # IFS=: 00:12:59.844 04:55:29 -- accel/accel.sh@20 -- # read -r var val 00:12:59.844 04:55:29 -- accel/accel.sh@21 -- # val= 00:12:59.844 04:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:59.844 04:55:29 -- accel/accel.sh@20 -- # IFS=: 00:12:59.844 04:55:29 -- accel/accel.sh@20 -- # read -r var val 00:12:59.844 04:55:29 -- accel/accel.sh@21 -- # val= 00:12:59.844 04:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:59.844 04:55:29 -- accel/accel.sh@20 -- # IFS=: 00:12:59.844 04:55:29 -- accel/accel.sh@20 -- # read -r var val 00:12:59.844 04:55:29 -- accel/accel.sh@21 -- # val= 00:12:59.844 04:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:59.844 04:55:29 -- accel/accel.sh@20 -- # IFS=: 00:12:59.844 04:55:29 -- 
accel/accel.sh@20 -- # read -r var val 00:12:59.844 04:55:29 -- accel/accel.sh@21 -- # val= 00:12:59.844 04:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:12:59.844 04:55:29 -- accel/accel.sh@20 -- # IFS=: 00:12:59.844 04:55:29 -- accel/accel.sh@20 -- # read -r var val 00:12:59.844 04:55:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:59.844 04:55:29 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:59.844 04:55:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:59.844 00:12:59.844 real 0m3.449s 00:12:59.844 user 0m10.314s 00:12:59.844 sys 0m0.501s 00:12:59.844 ************************************ 00:12:59.844 END TEST accel_decomp_full_mcore 00:12:59.844 ************************************ 00:12:59.844 04:55:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.844 04:55:29 -- common/autotest_common.sh@10 -- # set +x 00:12:59.844 04:55:29 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:59.844 04:55:29 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:59.844 04:55:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:59.844 04:55:29 -- common/autotest_common.sh@10 -- # set +x 00:12:59.844 ************************************ 00:12:59.844 START TEST accel_decomp_mthread 00:12:59.844 ************************************ 00:12:59.844 04:55:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:59.844 04:55:29 -- accel/accel.sh@16 -- # local accel_opc 00:12:59.844 04:55:29 -- accel/accel.sh@17 -- # local accel_module 00:12:59.844 04:55:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:59.844 04:55:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:59.844 04:55:29 -- accel/accel.sh@12 -- # build_accel_config 00:12:59.844 04:55:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:59.844 04:55:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:59.844 04:55:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:59.844 04:55:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:59.844 04:55:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:59.844 04:55:29 -- accel/accel.sh@41 -- # local IFS=, 00:12:59.844 04:55:29 -- accel/accel.sh@42 -- # jq -r . 00:12:59.844 [2024-04-27 04:55:29.480688] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:59.844 [2024-04-27 04:55:29.481111] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120526 ] 00:12:59.844 [2024-04-27 04:55:29.639676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.103 [2024-04-27 04:55:29.746373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.479 04:55:31 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:13:01.479 00:13:01.479 SPDK Configuration: 00:13:01.479 Core mask: 0x1 00:13:01.479 00:13:01.479 Accel Perf Configuration: 00:13:01.479 Workload Type: decompress 00:13:01.479 Transfer size: 4096 bytes 00:13:01.479 Vector count 1 00:13:01.479 Module: software 00:13:01.479 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:01.479 Queue depth: 32 00:13:01.479 Allocate depth: 32 00:13:01.479 # threads/core: 2 00:13:01.479 Run time: 1 seconds 00:13:01.479 Verify: Yes 00:13:01.479 00:13:01.480 Running for 1 seconds... 00:13:01.480 00:13:01.480 Core,Thread Transfers Bandwidth Failed Miscompares 00:13:01.480 ------------------------------------------------------------------------------------ 00:13:01.480 0,1 34048/s 62 MiB/s 0 0 00:13:01.480 0,0 33920/s 62 MiB/s 0 0 00:13:01.480 ==================================================================================== 00:13:01.480 Total 67968/s 265 MiB/s 0 0' 00:13:01.480 04:55:31 -- accel/accel.sh@20 -- # IFS=: 00:13:01.480 04:55:31 -- accel/accel.sh@20 -- # read -r var val 00:13:01.480 04:55:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:01.480 04:55:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:01.480 04:55:31 -- accel/accel.sh@12 -- # build_accel_config 00:13:01.480 04:55:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:01.480 04:55:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:01.480 04:55:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:01.480 04:55:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:01.480 04:55:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:01.480 04:55:31 -- accel/accel.sh@41 -- # local IFS=, 00:13:01.480 04:55:31 -- accel/accel.sh@42 -- # jq -r . 00:13:01.480 [2024-04-27 04:55:31.137512] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
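The two result rows above (0,0 and 0,1) come from the -T 2 flag on this accel_test invocation: accel_perf runs two worker threads on core 0, each reported separately and summed in the Total row. A minimal sketch of the same single-core, two-thread run (flags copied from the trace):

cd /home/vagrant/spdk_repo/spdk
./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2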
00:13:01.480 [2024-04-27 04:55:31.137961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120561 ] 00:13:01.480 [2024-04-27 04:55:31.308139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.739 [2024-04-27 04:55:31.408386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.739 04:55:31 -- accel/accel.sh@21 -- # val= 00:13:01.739 04:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # IFS=: 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # read -r var val 00:13:01.739 04:55:31 -- accel/accel.sh@21 -- # val= 00:13:01.739 04:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # IFS=: 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # read -r var val 00:13:01.739 04:55:31 -- accel/accel.sh@21 -- # val= 00:13:01.739 04:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # IFS=: 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # read -r var val 00:13:01.739 04:55:31 -- accel/accel.sh@21 -- # val=0x1 00:13:01.739 04:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # IFS=: 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # read -r var val 00:13:01.739 04:55:31 -- accel/accel.sh@21 -- # val= 00:13:01.739 04:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # IFS=: 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # read -r var val 00:13:01.739 04:55:31 -- accel/accel.sh@21 -- # val= 00:13:01.739 04:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # IFS=: 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # read -r var val 00:13:01.739 04:55:31 -- accel/accel.sh@21 -- # val=decompress 00:13:01.739 04:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:13:01.739 04:55:31 -- accel/accel.sh@24 -- # accel_opc=decompress 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # IFS=: 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # read -r var val 00:13:01.739 04:55:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:13:01.739 04:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # IFS=: 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # read -r var val 00:13:01.739 04:55:31 -- accel/accel.sh@21 -- # val= 00:13:01.739 04:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # IFS=: 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # read -r var val 00:13:01.739 04:55:31 -- accel/accel.sh@21 -- # val=software 00:13:01.739 04:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:13:01.739 04:55:31 -- accel/accel.sh@23 -- # accel_module=software 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # IFS=: 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # read -r var val 00:13:01.739 04:55:31 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:01.739 04:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # IFS=: 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # read -r var val 00:13:01.739 04:55:31 -- accel/accel.sh@21 -- # val=32 00:13:01.739 04:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # IFS=: 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # read -r var val 00:13:01.739 04:55:31 -- 
accel/accel.sh@21 -- # val=32 00:13:01.739 04:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # IFS=: 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # read -r var val 00:13:01.739 04:55:31 -- accel/accel.sh@21 -- # val=2 00:13:01.739 04:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # IFS=: 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # read -r var val 00:13:01.739 04:55:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:13:01.739 04:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # IFS=: 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # read -r var val 00:13:01.739 04:55:31 -- accel/accel.sh@21 -- # val=Yes 00:13:01.739 04:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # IFS=: 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # read -r var val 00:13:01.739 04:55:31 -- accel/accel.sh@21 -- # val= 00:13:01.739 04:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # IFS=: 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # read -r var val 00:13:01.739 04:55:31 -- accel/accel.sh@21 -- # val= 00:13:01.739 04:55:31 -- accel/accel.sh@22 -- # case "$var" in 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # IFS=: 00:13:01.739 04:55:31 -- accel/accel.sh@20 -- # read -r var val 00:13:03.118 04:55:32 -- accel/accel.sh@21 -- # val= 00:13:03.118 04:55:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:03.118 04:55:32 -- accel/accel.sh@20 -- # IFS=: 00:13:03.118 04:55:32 -- accel/accel.sh@20 -- # read -r var val 00:13:03.118 04:55:32 -- accel/accel.sh@21 -- # val= 00:13:03.118 04:55:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:03.118 04:55:32 -- accel/accel.sh@20 -- # IFS=: 00:13:03.118 04:55:32 -- accel/accel.sh@20 -- # read -r var val 00:13:03.118 04:55:32 -- accel/accel.sh@21 -- # val= 00:13:03.118 04:55:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:03.118 04:55:32 -- accel/accel.sh@20 -- # IFS=: 00:13:03.118 04:55:32 -- accel/accel.sh@20 -- # read -r var val 00:13:03.118 04:55:32 -- accel/accel.sh@21 -- # val= 00:13:03.118 04:55:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:03.118 04:55:32 -- accel/accel.sh@20 -- # IFS=: 00:13:03.118 04:55:32 -- accel/accel.sh@20 -- # read -r var val 00:13:03.118 04:55:32 -- accel/accel.sh@21 -- # val= 00:13:03.118 04:55:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:03.118 04:55:32 -- accel/accel.sh@20 -- # IFS=: 00:13:03.118 04:55:32 -- accel/accel.sh@20 -- # read -r var val 00:13:03.118 04:55:32 -- accel/accel.sh@21 -- # val= 00:13:03.118 04:55:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:03.118 04:55:32 -- accel/accel.sh@20 -- # IFS=: 00:13:03.118 04:55:32 -- accel/accel.sh@20 -- # read -r var val 00:13:03.118 04:55:32 -- accel/accel.sh@21 -- # val= 00:13:03.118 04:55:32 -- accel/accel.sh@22 -- # case "$var" in 00:13:03.118 04:55:32 -- accel/accel.sh@20 -- # IFS=: 00:13:03.118 04:55:32 -- accel/accel.sh@20 -- # read -r var val 00:13:03.118 ************************************ 00:13:03.118 END TEST accel_decomp_mthread 00:13:03.118 ************************************ 00:13:03.118 04:55:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:13:03.118 04:55:32 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:13:03.118 04:55:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:03.118 00:13:03.118 real 0m3.359s 00:13:03.118 user 0m2.764s 00:13:03.118 sys 0m0.409s 00:13:03.118 04:55:32 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:13:03.118 04:55:32 -- common/autotest_common.sh@10 -- # set +x 00:13:03.118 04:55:32 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:03.118 04:55:32 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:03.118 04:55:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:03.118 04:55:32 -- common/autotest_common.sh@10 -- # set +x 00:13:03.118 ************************************ 00:13:03.118 START TEST accel_deomp_full_mthread 00:13:03.118 ************************************ 00:13:03.118 04:55:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:03.118 04:55:32 -- accel/accel.sh@16 -- # local accel_opc 00:13:03.118 04:55:32 -- accel/accel.sh@17 -- # local accel_module 00:13:03.118 04:55:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:03.118 04:55:32 -- accel/accel.sh@12 -- # build_accel_config 00:13:03.118 04:55:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:03.118 04:55:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:03.118 04:55:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:03.118 04:55:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:03.118 04:55:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:03.118 04:55:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:03.118 04:55:32 -- accel/accel.sh@41 -- # local IFS=, 00:13:03.118 04:55:32 -- accel/accel.sh@42 -- # jq -r . 00:13:03.118 [2024-04-27 04:55:32.902003] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:03.118 [2024-04-27 04:55:32.903082] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120601 ] 00:13:03.376 [2024-04-27 04:55:33.077389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.376 [2024-04-27 04:55:33.177513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.752 04:55:34 -- accel/accel.sh@18 -- # out='Preparing input file... 00:13:04.752 00:13:04.752 SPDK Configuration: 00:13:04.752 Core mask: 0x1 00:13:04.752 00:13:04.752 Accel Perf Configuration: 00:13:04.752 Workload Type: decompress 00:13:04.752 Transfer size: 111250 bytes 00:13:04.752 Vector count 1 00:13:04.752 Module: software 00:13:04.752 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:04.752 Queue depth: 32 00:13:04.752 Allocate depth: 32 00:13:04.752 # threads/core: 2 00:13:04.752 Run time: 1 seconds 00:13:04.752 Verify: Yes 00:13:04.752 00:13:04.752 Running for 1 seconds... 
00:13:04.752 00:13:04.752 Core,Thread Transfers Bandwidth Failed Miscompares 00:13:04.752 ------------------------------------------------------------------------------------ 00:13:04.752 0,1 2464/s 101 MiB/s 0 0 00:13:04.752 0,0 2432/s 100 MiB/s 0 0 00:13:04.752 ==================================================================================== 00:13:04.752 Total 4896/s 519 MiB/s 0 0' 00:13:04.752 04:55:34 -- accel/accel.sh@20 -- # IFS=: 00:13:04.752 04:55:34 -- accel/accel.sh@20 -- # read -r var val 00:13:04.752 04:55:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:04.752 04:55:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:04.752 04:55:34 -- accel/accel.sh@12 -- # build_accel_config 00:13:04.752 04:55:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:04.752 04:55:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:04.752 04:55:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:04.752 04:55:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:04.752 04:55:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:04.752 04:55:34 -- accel/accel.sh@41 -- # local IFS=, 00:13:04.752 04:55:34 -- accel/accel.sh@42 -- # jq -r . 00:13:04.752 [2024-04-27 04:55:34.619273] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:04.752 [2024-04-27 04:55:34.619749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120636 ] 00:13:05.019 [2024-04-27 04:55:34.792052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.308 [2024-04-27 04:55:34.915302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.308 04:55:35 -- accel/accel.sh@21 -- # val= 00:13:05.308 04:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # IFS=: 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # read -r var val 00:13:05.308 04:55:35 -- accel/accel.sh@21 -- # val= 00:13:05.308 04:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # IFS=: 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # read -r var val 00:13:05.308 04:55:35 -- accel/accel.sh@21 -- # val= 00:13:05.308 04:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # IFS=: 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # read -r var val 00:13:05.308 04:55:35 -- accel/accel.sh@21 -- # val=0x1 00:13:05.308 04:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # IFS=: 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # read -r var val 00:13:05.308 04:55:35 -- accel/accel.sh@21 -- # val= 00:13:05.308 04:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # IFS=: 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # read -r var val 00:13:05.308 04:55:35 -- accel/accel.sh@21 -- # val= 00:13:05.308 04:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # IFS=: 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # read -r var val 00:13:05.308 04:55:35 -- accel/accel.sh@21 -- # val=decompress 00:13:05.308 04:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:13:05.308 04:55:35 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # IFS=: 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # read -r var val 00:13:05.308 04:55:35 -- accel/accel.sh@21 -- # val='111250 bytes' 00:13:05.308 04:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # IFS=: 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # read -r var val 00:13:05.308 04:55:35 -- accel/accel.sh@21 -- # val= 00:13:05.308 04:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # IFS=: 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # read -r var val 00:13:05.308 04:55:35 -- accel/accel.sh@21 -- # val=software 00:13:05.308 04:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:13:05.308 04:55:35 -- accel/accel.sh@23 -- # accel_module=software 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # IFS=: 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # read -r var val 00:13:05.308 04:55:35 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:05.308 04:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # IFS=: 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # read -r var val 00:13:05.308 04:55:35 -- accel/accel.sh@21 -- # val=32 00:13:05.308 04:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # IFS=: 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # read -r var val 00:13:05.308 04:55:35 -- accel/accel.sh@21 -- # val=32 00:13:05.308 04:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # IFS=: 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # read -r var val 00:13:05.308 04:55:35 -- accel/accel.sh@21 -- # val=2 00:13:05.308 04:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # IFS=: 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # read -r var val 00:13:05.308 04:55:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:13:05.308 04:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # IFS=: 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # read -r var val 00:13:05.308 04:55:35 -- accel/accel.sh@21 -- # val=Yes 00:13:05.308 04:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # IFS=: 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # read -r var val 00:13:05.308 04:55:35 -- accel/accel.sh@21 -- # val= 00:13:05.308 04:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # IFS=: 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # read -r var val 00:13:05.308 04:55:35 -- accel/accel.sh@21 -- # val= 00:13:05.308 04:55:35 -- accel/accel.sh@22 -- # case "$var" in 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # IFS=: 00:13:05.308 04:55:35 -- accel/accel.sh@20 -- # read -r var val 00:13:06.683 04:55:36 -- accel/accel.sh@21 -- # val= 00:13:06.684 04:55:36 -- accel/accel.sh@22 -- # case "$var" in 00:13:06.684 04:55:36 -- accel/accel.sh@20 -- # IFS=: 00:13:06.684 04:55:36 -- accel/accel.sh@20 -- # read -r var val 00:13:06.684 04:55:36 -- accel/accel.sh@21 -- # val= 00:13:06.684 04:55:36 -- accel/accel.sh@22 -- # case "$var" in 00:13:06.684 04:55:36 -- accel/accel.sh@20 -- # IFS=: 00:13:06.684 04:55:36 -- accel/accel.sh@20 -- # read -r var val 00:13:06.684 04:55:36 -- accel/accel.sh@21 -- # val= 00:13:06.684 04:55:36 -- accel/accel.sh@22 -- # case "$var" in 00:13:06.684 04:55:36 -- accel/accel.sh@20 -- # IFS=: 00:13:06.684 04:55:36 -- accel/accel.sh@20 -- # 
read -r var val 00:13:06.684 04:55:36 -- accel/accel.sh@21 -- # val= 00:13:06.684 04:55:36 -- accel/accel.sh@22 -- # case "$var" in 00:13:06.684 04:55:36 -- accel/accel.sh@20 -- # IFS=: 00:13:06.684 04:55:36 -- accel/accel.sh@20 -- # read -r var val 00:13:06.684 04:55:36 -- accel/accel.sh@21 -- # val= 00:13:06.684 04:55:36 -- accel/accel.sh@22 -- # case "$var" in 00:13:06.684 04:55:36 -- accel/accel.sh@20 -- # IFS=: 00:13:06.684 04:55:36 -- accel/accel.sh@20 -- # read -r var val 00:13:06.684 04:55:36 -- accel/accel.sh@21 -- # val= 00:13:06.684 04:55:36 -- accel/accel.sh@22 -- # case "$var" in 00:13:06.684 04:55:36 -- accel/accel.sh@20 -- # IFS=: 00:13:06.684 04:55:36 -- accel/accel.sh@20 -- # read -r var val 00:13:06.684 04:55:36 -- accel/accel.sh@21 -- # val= 00:13:06.684 04:55:36 -- accel/accel.sh@22 -- # case "$var" in 00:13:06.684 04:55:36 -- accel/accel.sh@20 -- # IFS=: 00:13:06.684 04:55:36 -- accel/accel.sh@20 -- # read -r var val 00:13:06.684 ************************************ 00:13:06.684 END TEST accel_deomp_full_mthread 00:13:06.684 ************************************ 00:13:06.684 04:55:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:13:06.684 04:55:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:13:06.684 04:55:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:06.684 00:13:06.684 real 0m3.487s 00:13:06.684 user 0m2.857s 00:13:06.684 sys 0m0.444s 00:13:06.684 04:55:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:06.684 04:55:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.684 04:55:36 -- accel/accel.sh@116 -- # [[ n == y ]] 00:13:06.684 04:55:36 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:06.684 04:55:36 -- accel/accel.sh@129 -- # build_accel_config 00:13:06.684 04:55:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:13:06.684 04:55:36 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:06.684 04:55:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:06.684 04:55:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:06.684 04:55:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.684 04:55:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:06.684 04:55:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:13:06.684 04:55:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:13:06.684 04:55:36 -- accel/accel.sh@41 -- # local IFS=, 00:13:06.684 04:55:36 -- accel/accel.sh@42 -- # jq -r . 00:13:06.684 ************************************ 00:13:06.684 START TEST accel_dif_functional_tests 00:13:06.684 ************************************ 00:13:06.684 04:55:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:06.684 [2024-04-27 04:55:36.489087] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:13:06.684 [2024-04-27 04:55:36.489663] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120678 ] 00:13:06.942 [2024-04-27 04:55:36.671060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:06.942 [2024-04-27 04:55:36.800192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.942 [2024-04-27 04:55:36.800332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.942 [2024-04-27 04:55:36.800329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.201 00:13:07.201 00:13:07.201 CUnit - A unit testing framework for C - Version 2.1-3 00:13:07.201 http://cunit.sourceforge.net/ 00:13:07.201 00:13:07.201 00:13:07.201 Suite: accel_dif 00:13:07.201 Test: verify: DIF generated, GUARD check ...passed 00:13:07.201 Test: verify: DIF generated, APPTAG check ...passed 00:13:07.201 Test: verify: DIF generated, REFTAG check ...passed 00:13:07.201 Test: verify: DIF not generated, GUARD check ...[2024-04-27 04:55:36.948426] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:07.201 [2024-04-27 04:55:36.948763] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:07.201 passed 00:13:07.201 Test: verify: DIF not generated, APPTAG check ...[2024-04-27 04:55:36.949142] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:07.201 [2024-04-27 04:55:36.949405] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:07.201 passed 00:13:07.201 Test: verify: DIF not generated, REFTAG check ...[2024-04-27 04:55:36.949745] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:07.201 [2024-04-27 04:55:36.950065] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:07.201 passed 00:13:07.201 Test: verify: APPTAG correct, APPTAG check ...passed 00:13:07.201 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-27 04:55:36.950686] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:13:07.201 passed 00:13:07.201 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:13:07.201 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:13:07.201 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:13:07.201 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-27 04:55:36.951788] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:13:07.201 passed 00:13:07.201 Test: generate copy: DIF generated, GUARD check ...passed 00:13:07.201 Test: generate copy: DIF generated, APTTAG check ...passed 00:13:07.201 Test: generate copy: DIF generated, REFTAG check ...passed 00:13:07.201 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:13:07.201 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:13:07.201 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:13:07.201 Test: generate copy: iovecs-len validate ...[2024-04-27 04:55:36.953564] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:13:07.201 passed 00:13:07.201 Test: generate copy: buffer alignment validate ...passed 00:13:07.201 00:13:07.201 Run Summary: Type Total Ran Passed Failed Inactive 00:13:07.201 suites 1 1 n/a 0 0 00:13:07.201 tests 20 20 20 0 0 00:13:07.201 asserts 204 204 204 0 n/a 00:13:07.201 00:13:07.201 Elapsed time = 0.018 seconds 00:13:07.460 ************************************ 00:13:07.460 END TEST accel_dif_functional_tests 00:13:07.460 ************************************ 00:13:07.460 00:13:07.460 real 0m0.936s 00:13:07.460 user 0m1.338s 00:13:07.460 sys 0m0.299s 00:13:07.460 04:55:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:07.460 04:55:37 -- common/autotest_common.sh@10 -- # set +x 00:13:07.718 00:13:07.718 real 1m23.809s 00:13:07.718 user 1m22.825s 00:13:07.718 sys 0m13.851s 00:13:07.718 04:55:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:07.718 ************************************ 00:13:07.718 END TEST accel 00:13:07.718 ************************************ 00:13:07.718 04:55:37 -- common/autotest_common.sh@10 -- # set +x 00:13:07.718 04:55:37 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:07.718 04:55:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:07.718 04:55:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:07.718 04:55:37 -- common/autotest_common.sh@10 -- # set +x 00:13:07.718 ************************************ 00:13:07.718 START TEST accel_rpc 00:13:07.718 ************************************ 00:13:07.718 04:55:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:07.718 * Looking for test storage... 00:13:07.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:13:07.718 04:55:37 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:07.718 04:55:37 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=120758 00:13:07.718 04:55:37 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:07.718 04:55:37 -- accel/accel_rpc.sh@15 -- # waitforlisten 120758 00:13:07.718 04:55:37 -- common/autotest_common.sh@819 -- # '[' -z 120758 ']' 00:13:07.718 04:55:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.718 04:55:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:07.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.718 04:55:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.718 04:55:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:07.718 04:55:37 -- common/autotest_common.sh@10 -- # set +x 00:13:07.718 [2024-04-27 04:55:37.585404] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
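The accel_rpc suite launched here verifies that opcode-to-module assignments can be driven over JSON-RPC while the target is still held at --wait-for-rpc. A minimal manual reproduction, assuming the default /var/tmp/spdk.sock socket, would be:

  cd /home/vagrant/spdk_repo/spdk
  ./scripts/rpc.py accel_assign_opc -o copy -m software     # pin the copy opcode to the software module
  ./scripts/rpc.py framework_start_init                     # finish subsystem initialization
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # prints: software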
00:13:07.718 [2024-04-27 04:55:37.585705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120758 ] 00:13:07.977 [2024-04-27 04:55:37.748958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.977 [2024-04-27 04:55:37.864977] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:07.977 [2024-04-27 04:55:37.865556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.913 04:55:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:08.913 04:55:38 -- common/autotest_common.sh@852 -- # return 0 00:13:08.913 04:55:38 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:13:08.913 04:55:38 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:13:08.913 04:55:38 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:13:08.913 04:55:38 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:13:08.913 04:55:38 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:13:08.913 04:55:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:08.913 04:55:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:08.913 04:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:08.913 ************************************ 00:13:08.913 START TEST accel_assign_opcode 00:13:08.913 ************************************ 00:13:08.913 04:55:38 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:13:08.913 04:55:38 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:13:08.913 04:55:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.913 04:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:08.913 [2024-04-27 04:55:38.518660] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:13:08.913 04:55:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.913 04:55:38 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:13:08.913 04:55:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.913 04:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:08.913 [2024-04-27 04:55:38.526625] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:13:08.913 04:55:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.913 04:55:38 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:13:08.913 04:55:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.913 04:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:09.171 04:55:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.171 04:55:38 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:09.171 04:55:38 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:09.171 04:55:38 -- accel/accel_rpc.sh@42 -- # grep software 00:13:09.171 04:55:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.171 04:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:09.171 04:55:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.171 software 00:13:09.171 00:13:09.171 real 0m0.407s 00:13:09.171 user 0m0.053s 00:13:09.171 sys 0m0.009s 00:13:09.171 04:55:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.171 ************************************ 00:13:09.171 04:55:38 -- common/autotest_common.sh@10 -- # set +x 
00:13:09.171 END TEST accel_assign_opcode 00:13:09.171 ************************************ 00:13:09.171 04:55:38 -- accel/accel_rpc.sh@55 -- # killprocess 120758 00:13:09.171 04:55:38 -- common/autotest_common.sh@926 -- # '[' -z 120758 ']' 00:13:09.171 04:55:38 -- common/autotest_common.sh@930 -- # kill -0 120758 00:13:09.171 04:55:38 -- common/autotest_common.sh@931 -- # uname 00:13:09.171 04:55:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:09.171 04:55:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120758 00:13:09.171 04:55:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:09.171 04:55:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:09.171 04:55:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120758' 00:13:09.171 killing process with pid 120758 00:13:09.171 04:55:38 -- common/autotest_common.sh@945 -- # kill 120758 00:13:09.171 04:55:38 -- common/autotest_common.sh@950 -- # wait 120758 00:13:10.106 00:13:10.106 real 0m2.217s 00:13:10.106 user 0m2.092s 00:13:10.106 sys 0m0.630s 00:13:10.106 04:55:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:10.106 ************************************ 00:13:10.106 END TEST accel_rpc 00:13:10.106 ************************************ 00:13:10.106 04:55:39 -- common/autotest_common.sh@10 -- # set +x 00:13:10.106 04:55:39 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:10.106 04:55:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:10.106 04:55:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:10.106 04:55:39 -- common/autotest_common.sh@10 -- # set +x 00:13:10.106 ************************************ 00:13:10.106 START TEST app_cmdline 00:13:10.106 ************************************ 00:13:10.106 04:55:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:10.106 * Looking for test storage... 00:13:10.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:10.106 04:55:39 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:10.106 04:55:39 -- app/cmdline.sh@17 -- # spdk_tgt_pid=120873 00:13:10.106 04:55:39 -- app/cmdline.sh@18 -- # waitforlisten 120873 00:13:10.106 04:55:39 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:10.106 04:55:39 -- common/autotest_common.sh@819 -- # '[' -z 120873 ']' 00:13:10.106 04:55:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.106 04:55:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:10.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.106 04:55:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.106 04:55:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:10.106 04:55:39 -- common/autotest_common.sh@10 -- # set +x 00:13:10.106 [2024-04-27 04:55:39.857093] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:13:10.106 [2024-04-27 04:55:39.857382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120873 ] 00:13:10.365 [2024-04-27 04:55:40.022788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.365 [2024-04-27 04:55:40.140453] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:10.365 [2024-04-27 04:55:40.140797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.932 04:55:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:10.932 04:55:40 -- common/autotest_common.sh@852 -- # return 0 00:13:10.932 04:55:40 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:13:11.191 { 00:13:11.191 "version": "SPDK v24.01.1-pre git sha1 36faa8c31", 00:13:11.191 "fields": { 00:13:11.191 "major": 24, 00:13:11.191 "minor": 1, 00:13:11.191 "patch": 1, 00:13:11.191 "suffix": "-pre", 00:13:11.191 "commit": "36faa8c31" 00:13:11.191 } 00:13:11.191 } 00:13:11.191 04:55:41 -- app/cmdline.sh@22 -- # expected_methods=() 00:13:11.191 04:55:41 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:11.191 04:55:41 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:11.191 04:55:41 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:11.191 04:55:41 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:11.191 04:55:41 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:11.191 04:55:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.191 04:55:41 -- common/autotest_common.sh@10 -- # set +x 00:13:11.191 04:55:41 -- app/cmdline.sh@26 -- # sort 00:13:11.191 04:55:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.450 04:55:41 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:11.450 04:55:41 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:11.450 04:55:41 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:11.450 04:55:41 -- common/autotest_common.sh@640 -- # local es=0 00:13:11.450 04:55:41 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:11.450 04:55:41 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:11.450 04:55:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:11.450 04:55:41 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:11.450 04:55:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:11.450 04:55:41 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:11.450 04:55:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:11.450 04:55:41 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:11.450 04:55:41 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:11.450 04:55:41 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:11.450 request: 00:13:11.450 { 00:13:11.450 "method": "env_dpdk_get_mem_stats", 00:13:11.450 "req_id": 1 00:13:11.450 } 00:13:11.450 Got 
JSON-RPC error response 00:13:11.450 response: 00:13:11.450 { 00:13:11.450 "code": -32601, 00:13:11.450 "message": "Method not found" 00:13:11.450 } 00:13:11.450 04:55:41 -- common/autotest_common.sh@643 -- # es=1 00:13:11.450 04:55:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:11.450 04:55:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:11.450 04:55:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:11.450 04:55:41 -- app/cmdline.sh@1 -- # killprocess 120873 00:13:11.450 04:55:41 -- common/autotest_common.sh@926 -- # '[' -z 120873 ']' 00:13:11.450 04:55:41 -- common/autotest_common.sh@930 -- # kill -0 120873 00:13:11.450 04:55:41 -- common/autotest_common.sh@931 -- # uname 00:13:11.450 04:55:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:11.450 04:55:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120873 00:13:11.709 04:55:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:11.709 04:55:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:11.709 killing process with pid 120873 00:13:11.709 04:55:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120873' 00:13:11.709 04:55:41 -- common/autotest_common.sh@945 -- # kill 120873 00:13:11.709 04:55:41 -- common/autotest_common.sh@950 -- # wait 120873 00:13:12.277 00:13:12.277 real 0m2.300s 00:13:12.277 user 0m2.596s 00:13:12.277 sys 0m0.662s 00:13:12.277 04:55:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:12.277 ************************************ 00:13:12.277 04:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:12.277 END TEST app_cmdline 00:13:12.277 ************************************ 00:13:12.277 04:55:42 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:12.277 04:55:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:12.277 04:55:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:12.277 04:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:12.277 ************************************ 00:13:12.277 START TEST version 00:13:12.277 ************************************ 00:13:12.277 04:55:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:12.277 * Looking for test storage... 
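The app_cmdline target above was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable; anything else, such as the env_dpdk_get_mem_stats call traced above, is rejected with JSON-RPC error -32601 (Method not found), which is exactly what the test asserts. A manual check, assuming the default RPC socket, would be:

  ./scripts/rpc.py spdk_get_version          # whitelisted: returns the version object shown above
  ./scripts/rpc.py rpc_get_methods           # whitelisted: lists exactly the allowed methods
  ./scripts/rpc.py env_dpdk_get_mem_stats    # not whitelisted: fails with -32601 'Method not found'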
00:13:12.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:12.277 04:55:42 -- app/version.sh@17 -- # get_header_version major 00:13:12.277 04:55:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:12.277 04:55:42 -- app/version.sh@14 -- # cut -f2 00:13:12.277 04:55:42 -- app/version.sh@14 -- # tr -d '"' 00:13:12.277 04:55:42 -- app/version.sh@17 -- # major=24 00:13:12.277 04:55:42 -- app/version.sh@18 -- # get_header_version minor 00:13:12.277 04:55:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:12.277 04:55:42 -- app/version.sh@14 -- # cut -f2 00:13:12.277 04:55:42 -- app/version.sh@14 -- # tr -d '"' 00:13:12.277 04:55:42 -- app/version.sh@18 -- # minor=1 00:13:12.277 04:55:42 -- app/version.sh@19 -- # get_header_version patch 00:13:12.277 04:55:42 -- app/version.sh@14 -- # cut -f2 00:13:12.277 04:55:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:12.277 04:55:42 -- app/version.sh@14 -- # tr -d '"' 00:13:12.277 04:55:42 -- app/version.sh@19 -- # patch=1 00:13:12.277 04:55:42 -- app/version.sh@20 -- # get_header_version suffix 00:13:12.277 04:55:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:12.277 04:55:42 -- app/version.sh@14 -- # cut -f2 00:13:12.277 04:55:42 -- app/version.sh@14 -- # tr -d '"' 00:13:12.277 04:55:42 -- app/version.sh@20 -- # suffix=-pre 00:13:12.277 04:55:42 -- app/version.sh@22 -- # version=24.1 00:13:12.277 04:55:42 -- app/version.sh@25 -- # (( patch != 0 )) 00:13:12.277 04:55:42 -- app/version.sh@25 -- # version=24.1.1 00:13:12.277 04:55:42 -- app/version.sh@28 -- # version=24.1.1rc0 00:13:12.277 04:55:42 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:12.277 04:55:42 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:12.536 04:55:42 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:13:12.536 04:55:42 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:13:12.536 00:13:12.536 real 0m0.141s 00:13:12.536 user 0m0.094s 00:13:12.536 sys 0m0.086s 00:13:12.536 04:55:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:12.536 04:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:12.536 ************************************ 00:13:12.536 END TEST version 00:13:12.536 ************************************ 00:13:12.536 04:55:42 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:13:12.536 04:55:42 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:13:12.536 04:55:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:12.536 04:55:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:12.536 04:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:12.536 ************************************ 00:13:12.536 START TEST blockdev_general 00:13:12.536 ************************************ 00:13:12.536 04:55:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:13:12.537 * Looking for test storage... 
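The version suite above derives 24.1.1rc0 purely from include/spdk/version.h with grep/cut/tr and then checks that the installed Python package reports the same string. A condensed view of what it does, assuming the repo checkout used throughout this run:

  cd /home/vagrant/spdk_repo/spdk
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
  # minor, patch and suffix are extracted the same way; the assembled string is compared with
  python3 -c 'import spdk; print(spdk.__version__)'   # expected: 24.1.1rc0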
00:13:12.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:12.537 04:55:42 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:12.537 04:55:42 -- bdev/nbd_common.sh@6 -- # set -e 00:13:12.537 04:55:42 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:12.537 04:55:42 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:12.537 04:55:42 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:12.537 04:55:42 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:12.537 04:55:42 -- bdev/blockdev.sh@18 -- # : 00:13:12.537 04:55:42 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:13:12.537 04:55:42 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:13:12.537 04:55:42 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:13:12.537 04:55:42 -- bdev/blockdev.sh@672 -- # uname -s 00:13:12.537 04:55:42 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:13:12.537 04:55:42 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:13:12.537 04:55:42 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:13:12.537 04:55:42 -- bdev/blockdev.sh@681 -- # crypto_device= 00:13:12.537 04:55:42 -- bdev/blockdev.sh@682 -- # dek= 00:13:12.537 04:55:42 -- bdev/blockdev.sh@683 -- # env_ctx= 00:13:12.537 04:55:42 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:13:12.537 04:55:42 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:13:12.537 04:55:42 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:13:12.537 04:55:42 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:13:12.537 04:55:42 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:13:12.537 04:55:42 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=121024 00:13:12.537 04:55:42 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:13:12.537 04:55:42 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:12.537 04:55:42 -- bdev/blockdev.sh@47 -- # waitforlisten 121024 00:13:12.537 04:55:42 -- common/autotest_common.sh@819 -- # '[' -z 121024 ']' 00:13:12.537 04:55:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.537 04:55:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:12.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.537 04:55:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.537 04:55:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:12.537 04:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:12.537 [2024-04-27 04:55:42.401592] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:13:12.537 [2024-04-27 04:55:42.401861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121024 ] 00:13:12.796 [2024-04-27 04:55:42.573218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.796 [2024-04-27 04:55:42.685769] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:12.796 [2024-04-27 04:55:42.686120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.729 04:55:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:13.729 04:55:43 -- common/autotest_common.sh@852 -- # return 0 00:13:13.730 04:55:43 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:13:13.730 04:55:43 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:13:13.730 04:55:43 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:13:13.730 04:55:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.730 04:55:43 -- common/autotest_common.sh@10 -- # set +x 00:13:13.987 [2024-04-27 04:55:43.735640] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:13.987 [2024-04-27 04:55:43.735787] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:13.987 00:13:13.987 [2024-04-27 04:55:43.743548] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:13.987 [2024-04-27 04:55:43.743641] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:13.987 00:13:13.987 Malloc0 00:13:13.987 Malloc1 00:13:13.987 Malloc2 00:13:13.987 Malloc3 00:13:13.987 Malloc4 00:13:14.244 Malloc5 00:13:14.244 Malloc6 00:13:14.244 Malloc7 00:13:14.244 Malloc8 00:13:14.244 Malloc9 00:13:14.244 [2024-04-27 04:55:43.972842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:14.244 [2024-04-27 04:55:43.973018] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.244 [2024-04-27 04:55:43.973077] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:14.244 [2024-04-27 04:55:43.973114] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.244 [2024-04-27 04:55:43.976064] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.244 [2024-04-27 04:55:43.976144] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:14.244 TestPT 00:13:14.244 04:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.244 04:55:44 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:13:14.244 5000+0 records in 00:13:14.244 5000+0 records out 00:13:14.244 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0295945 s, 346 MB/s 00:13:14.244 04:55:44 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:13:14.244 04:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.244 04:55:44 -- common/autotest_common.sh@10 -- # set +x 00:13:14.244 AIO0 00:13:14.244 04:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.244 04:55:44 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:13:14.244 04:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.244 04:55:44 -- common/autotest_common.sh@10 -- # set +x 
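The configuration step above builds the bdev topology the rest of blockdev_general runs against: Malloc0 through Malloc9 ramdisks, split vbdevs layered on Malloc1 and Malloc2, the passthru bdev TestPT claiming Malloc3, and AIO0 backed by the 10 MB aiofile written with dd (the raid0, concat0 and raid1 volumes over Malloc4-Malloc9 appear in the bdev_get_bdevs dump further down). A hedged sketch of the equivalent manual RPCs, where the flag spellings for bdev_passthru_create are assumed rather than taken from this log:

  ./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0         # 65536 blocks of 512 bytes = 32 MiB
  ./scripts/rpc.py bdev_split_create Malloc1 2                  # Malloc1p0 / Malloc1p1
  ./scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT    # passthru vbdev claiming Malloc3
  dd if=/dev/zero of=test/bdev/aiofile bs=2048 count=5000       # 10 MB backing file
  ./scripts/rpc.py bdev_aio_create test/bdev/aiofile AIO0 2048  # AIO bdev with a 2048-byte block size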
00:13:14.244 04:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.244 04:55:44 -- bdev/blockdev.sh@738 -- # cat 00:13:14.244 04:55:44 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:13:14.244 04:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.244 04:55:44 -- common/autotest_common.sh@10 -- # set +x 00:13:14.244 04:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.244 04:55:44 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:13:14.244 04:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.244 04:55:44 -- common/autotest_common.sh@10 -- # set +x 00:13:14.503 04:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.503 04:55:44 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:14.503 04:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.503 04:55:44 -- common/autotest_common.sh@10 -- # set +x 00:13:14.503 04:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.503 04:55:44 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:13:14.503 04:55:44 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:13:14.503 04:55:44 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:13:14.503 04:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.503 04:55:44 -- common/autotest_common.sh@10 -- # set +x 00:13:14.503 04:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.503 04:55:44 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:13:14.503 04:55:44 -- bdev/blockdev.sh@747 -- # jq -r .name 00:13:14.504 04:55:44 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "95e49f3e-60d6-44e6-affc-a590bd6835cb"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "95e49f3e-60d6-44e6-affc-a590bd6835cb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "59942da2-35ae-52c3-ad6a-fe77b4482ba8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "59942da2-35ae-52c3-ad6a-fe77b4482ba8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "22d02607-ab96-5523-a0e2-3424145fc97d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "22d02607-ab96-5523-a0e2-3424145fc97d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "ef351d09-fab9-55f6-9ece-3caed4fc3950"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ef351d09-fab9-55f6-9ece-3caed4fc3950",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "57fbc074-46c1-5ab0-95ac-6432853f30f6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "57fbc074-46c1-5ab0-95ac-6432853f30f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "5e0f953d-7e58-5cdf-a17b-3a0ebd6c8f27"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5e0f953d-7e58-5cdf-a17b-3a0ebd6c8f27",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "b0d0ce52-e92a-5070-89c1-d5faf36ef3c4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b0d0ce52-e92a-5070-89c1-d5faf36ef3c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "c0ae75c1-7251-5cd9-b52f-eebf2870f494"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c0ae75c1-7251-5cd9-b52f-eebf2870f494",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "696b5c80-4876-5603-af17-09088df3ab99"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "696b5c80-4876-5603-af17-09088df3ab99",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "4992917e-b718-5ab3-a772-f90037bb6ebe"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4992917e-b718-5ab3-a772-f90037bb6ebe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "bf3a9708-5e7e-5255-9022-1366db6c987a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bf3a9708-5e7e-5255-9022-1366db6c987a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "f9bcbc75-25f6-5d3d-8dd4-a8c0a7061d19"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f9bcbc75-25f6-5d3d-8dd4-a8c0a7061d19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "6872e709-d0ee-4841-b0b7-cd843570b946"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6872e709-d0ee-4841-b0b7-cd843570b946",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "6872e709-d0ee-4841-b0b7-cd843570b946",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "53684529-dd64-4f7c-a81f-f09420dc3fc5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "870e42cb-2b67-466f-834d-09329177486b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "7cbc1094-3db2-4000-aa31-16e00824d4f3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7cbc1094-3db2-4000-aa31-16e00824d4f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "7cbc1094-3db2-4000-aa31-16e00824d4f3",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7e4b16a0-5d03-4f03-98a5-160f0a01f266",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "5d51727b-b91f-4b0b-a3e7-896c6c588b5d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "437a53d9-7e44-462a-83bd-f454a64cf843"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "437a53d9-7e44-462a-83bd-f454a64cf843",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "437a53d9-7e44-462a-83bd-f454a64cf843",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "db0ee6b6-88d2-4fd5-9247-6d3e3875a041",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "6085a43f-9a4c-4692-bc0f-65c10c8f34a1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "98fb0ece-b8c5-4225-88cd-39cb40366a0e"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "98fb0ece-b8c5-4225-88cd-39cb40366a0e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:13:14.504 04:55:44 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:13:14.504 04:55:44 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:13:14.504 04:55:44 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:13:14.504 04:55:44 -- bdev/blockdev.sh@752 -- # killprocess 121024 00:13:14.504 04:55:44 -- common/autotest_common.sh@926 -- # '[' -z 121024 ']' 00:13:14.504 04:55:44 -- common/autotest_common.sh@930 -- # kill -0 121024 00:13:14.504 04:55:44 -- common/autotest_common.sh@931 -- # uname 00:13:14.504 04:55:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:14.504 04:55:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121024 00:13:14.504 04:55:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:14.504 04:55:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:14.504 killing process with pid 121024 00:13:14.504 04:55:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121024' 00:13:14.504 04:55:44 -- common/autotest_common.sh@945 -- # kill 121024 00:13:14.504 04:55:44 -- common/autotest_common.sh@950 -- # wait 121024 00:13:15.436 04:55:45 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:15.436 04:55:45 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:13:15.436 04:55:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:13:15.436 04:55:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:15.436 04:55:45 -- common/autotest_common.sh@10 -- # set +x 00:13:15.436 ************************************ 00:13:15.436 START TEST bdev_hello_world 00:13:15.437 ************************************ 00:13:15.437 04:55:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:13:15.437 [2024-04-27 04:55:45.250399] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:15.437 [2024-04-27 04:55:45.250724] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121091 ] 00:13:15.694 [2024-04-27 04:55:45.420403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.694 [2024-04-27 04:55:45.526620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.952 [2024-04-27 04:55:45.716878] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:15.952 [2024-04-27 04:55:45.717033] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:15.952 [2024-04-27 04:55:45.724816] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:15.952 [2024-04-27 04:55:45.724912] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:15.953 [2024-04-27 04:55:45.732878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:15.953 [2024-04-27 04:55:45.733012] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:15.953 [2024-04-27 04:55:45.733063] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:16.213 [2024-04-27 04:55:45.851261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:16.213 [2024-04-27 04:55:45.851455] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.213 [2024-04-27 04:55:45.851532] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:16.213 [2024-04-27 04:55:45.851570] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.213 [2024-04-27 04:55:45.854592] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.213 [2024-04-27 04:55:45.854666] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:16.213 [2024-04-27 04:55:46.066166] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:16.213 [2024-04-27 04:55:46.066275] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:13:16.213 [2024-04-27 04:55:46.066402] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:16.213 [2024-04-27 04:55:46.066499] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:16.213 [2024-04-27 04:55:46.066672] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:16.213 [2024-04-27 04:55:46.066715] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:16.213 [2024-04-27 04:55:46.066804] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
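The trace above is the whole hello_bdev example: it opens Malloc0 from the dumped bdev.json config, takes an I/O channel, writes the string "Hello World!" and reads it straight back. The invocation used by the test can be replayed by hand against any bdev named in that config:

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0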
00:13:16.213 00:13:16.213 [2024-04-27 04:55:46.066889] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:16.778 00:13:16.779 real 0m1.445s 00:13:16.779 user 0m0.858s 00:13:16.779 sys 0m0.421s 00:13:16.779 04:55:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:16.779 04:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:16.779 ************************************ 00:13:16.779 END TEST bdev_hello_world 00:13:16.779 ************************************ 00:13:16.779 04:55:46 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:13:16.779 04:55:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:16.779 04:55:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:16.779 04:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:17.037 ************************************ 00:13:17.037 START TEST bdev_bounds 00:13:17.037 ************************************ 00:13:17.037 04:55:46 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:13:17.037 04:55:46 -- bdev/blockdev.sh@288 -- # bdevio_pid=121130 00:13:17.037 04:55:46 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:17.037 04:55:46 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:17.037 Process bdevio pid: 121130 00:13:17.037 04:55:46 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 121130' 00:13:17.037 04:55:46 -- bdev/blockdev.sh@291 -- # waitforlisten 121130 00:13:17.037 04:55:46 -- common/autotest_common.sh@819 -- # '[' -z 121130 ']' 00:13:17.037 04:55:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.037 04:55:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:17.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.037 04:55:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.037 04:55:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:17.037 04:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:17.037 [2024-04-27 04:55:46.755340] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:13:17.037 [2024-04-27 04:55:46.755608] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121130 ] 00:13:17.295 [2024-04-27 04:55:46.936051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:17.295 [2024-04-27 04:55:47.051524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.295 [2024-04-27 04:55:47.052094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.295 [2024-04-27 04:55:47.052136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.554 [2024-04-27 04:55:47.242733] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:17.554 [2024-04-27 04:55:47.242907] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:17.554 [2024-04-27 04:55:47.250634] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:17.554 [2024-04-27 04:55:47.250728] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:17.554 [2024-04-27 04:55:47.258725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:17.554 [2024-04-27 04:55:47.258833] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:17.554 [2024-04-27 04:55:47.258905] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:17.554 [2024-04-27 04:55:47.372769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:17.554 [2024-04-27 04:55:47.372922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.554 [2024-04-27 04:55:47.373008] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:17.554 [2024-04-27 04:55:47.373047] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.554 [2024-04-27 04:55:47.376322] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.554 [2024-04-27 04:55:47.376386] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:18.121 04:55:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:18.121 04:55:47 -- common/autotest_common.sh@852 -- # return 0 00:13:18.121 04:55:47 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:18.121 I/O targets: 00:13:18.121 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:13:18.121 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:13:18.121 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:13:18.121 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:13:18.121 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:13:18.121 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:13:18.121 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:13:18.121 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:13:18.121 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:13:18.121 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:13:18.121 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:13:18.121 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:13:18.121 raid0: 131072 blocks of 512 bytes (64 MiB) 00:13:18.121 concat0: 131072 blocks of 512 bytes (64 MiB) 00:13:18.121 raid1: 65536 blocks of 512 bytes (32 MiB) 00:13:18.121 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
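The I/O targets listed above are everything bdevio will exercise. The binary was started in wait mode (-w) with no reserved memory (-s 0), and tests.py perform_tests then drives the same generic read/write/reset/comparev matrix over JSON-RPC against each target in turn, which is why the per-suite output below repeats for AIO0, raid1, concat0, raid0, TestPT and the rest. A simplified sketch of the two-step launch (the harness waits for the RPC socket to appear instead of just backgrounding the process):

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests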
00:13:18.121 00:13:18.121 00:13:18.121 CUnit - A unit testing framework for C - Version 2.1-3 00:13:18.121 http://cunit.sourceforge.net/ 00:13:18.121 00:13:18.121 00:13:18.121 Suite: bdevio tests on: AIO0 00:13:18.121 Test: blockdev write read block ...passed 00:13:18.121 Test: blockdev write zeroes read block ...passed 00:13:18.121 Test: blockdev write zeroes read no split ...passed 00:13:18.121 Test: blockdev write zeroes read split ...passed 00:13:18.121 Test: blockdev write zeroes read split partial ...passed 00:13:18.121 Test: blockdev reset ...passed 00:13:18.121 Test: blockdev write read 8 blocks ...passed 00:13:18.121 Test: blockdev write read size > 128k ...passed 00:13:18.121 Test: blockdev write read invalid size ...passed 00:13:18.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.121 Test: blockdev write read max offset ...passed 00:13:18.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.121 Test: blockdev writev readv 8 blocks ...passed 00:13:18.121 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.121 Test: blockdev writev readv block ...passed 00:13:18.121 Test: blockdev writev readv size > 128k ...passed 00:13:18.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.121 Test: blockdev comparev and writev ...passed 00:13:18.121 Test: blockdev nvme passthru rw ...passed 00:13:18.121 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.121 Test: blockdev nvme admin passthru ...passed 00:13:18.121 Test: blockdev copy ...passed 00:13:18.121 Suite: bdevio tests on: raid1 00:13:18.121 Test: blockdev write read block ...passed 00:13:18.121 Test: blockdev write zeroes read block ...passed 00:13:18.121 Test: blockdev write zeroes read no split ...passed 00:13:18.121 Test: blockdev write zeroes read split ...passed 00:13:18.121 Test: blockdev write zeroes read split partial ...passed 00:13:18.121 Test: blockdev reset ...passed 00:13:18.121 Test: blockdev write read 8 blocks ...passed 00:13:18.121 Test: blockdev write read size > 128k ...passed 00:13:18.121 Test: blockdev write read invalid size ...passed 00:13:18.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.121 Test: blockdev write read max offset ...passed 00:13:18.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.121 Test: blockdev writev readv 8 blocks ...passed 00:13:18.121 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.121 Test: blockdev writev readv block ...passed 00:13:18.121 Test: blockdev writev readv size > 128k ...passed 00:13:18.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.121 Test: blockdev comparev and writev ...passed 00:13:18.121 Test: blockdev nvme passthru rw ...passed 00:13:18.121 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.121 Test: blockdev nvme admin passthru ...passed 00:13:18.121 Test: blockdev copy ...passed 00:13:18.121 Suite: bdevio tests on: concat0 00:13:18.121 Test: blockdev write read block ...passed 00:13:18.121 Test: blockdev write zeroes read block ...passed 00:13:18.121 Test: blockdev write zeroes read no split ...passed 00:13:18.121 Test: blockdev write zeroes read split ...passed 00:13:18.121 Test: blockdev write zeroes read split partial ...passed 00:13:18.121 Test: blockdev reset 
...passed 00:13:18.121 Test: blockdev write read 8 blocks ...passed 00:13:18.121 Test: blockdev write read size > 128k ...passed 00:13:18.121 Test: blockdev write read invalid size ...passed 00:13:18.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.121 Test: blockdev write read max offset ...passed 00:13:18.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.121 Test: blockdev writev readv 8 blocks ...passed 00:13:18.121 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.121 Test: blockdev writev readv block ...passed 00:13:18.121 Test: blockdev writev readv size > 128k ...passed 00:13:18.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.121 Test: blockdev comparev and writev ...passed 00:13:18.121 Test: blockdev nvme passthru rw ...passed 00:13:18.121 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.121 Test: blockdev nvme admin passthru ...passed 00:13:18.121 Test: blockdev copy ...passed 00:13:18.121 Suite: bdevio tests on: raid0 00:13:18.121 Test: blockdev write read block ...passed 00:13:18.121 Test: blockdev write zeroes read block ...passed 00:13:18.121 Test: blockdev write zeroes read no split ...passed 00:13:18.121 Test: blockdev write zeroes read split ...passed 00:13:18.121 Test: blockdev write zeroes read split partial ...passed 00:13:18.121 Test: blockdev reset ...passed 00:13:18.121 Test: blockdev write read 8 blocks ...passed 00:13:18.121 Test: blockdev write read size > 128k ...passed 00:13:18.121 Test: blockdev write read invalid size ...passed 00:13:18.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.121 Test: blockdev write read max offset ...passed 00:13:18.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.121 Test: blockdev writev readv 8 blocks ...passed 00:13:18.121 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.121 Test: blockdev writev readv block ...passed 00:13:18.121 Test: blockdev writev readv size > 128k ...passed 00:13:18.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.121 Test: blockdev comparev and writev ...passed 00:13:18.121 Test: blockdev nvme passthru rw ...passed 00:13:18.121 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.121 Test: blockdev nvme admin passthru ...passed 00:13:18.121 Test: blockdev copy ...passed 00:13:18.121 Suite: bdevio tests on: TestPT 00:13:18.121 Test: blockdev write read block ...passed 00:13:18.121 Test: blockdev write zeroes read block ...passed 00:13:18.121 Test: blockdev write zeroes read no split ...passed 00:13:18.121 Test: blockdev write zeroes read split ...passed 00:13:18.121 Test: blockdev write zeroes read split partial ...passed 00:13:18.121 Test: blockdev reset ...passed 00:13:18.380 Test: blockdev write read 8 blocks ...passed 00:13:18.380 Test: blockdev write read size > 128k ...passed 00:13:18.380 Test: blockdev write read invalid size ...passed 00:13:18.380 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.380 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.380 Test: blockdev write read max offset ...passed 00:13:18.381 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.381 Test: blockdev writev readv 8 blocks 
...passed 00:13:18.381 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.381 Test: blockdev writev readv block ...passed 00:13:18.381 Test: blockdev writev readv size > 128k ...passed 00:13:18.381 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.381 Test: blockdev comparev and writev ...passed 00:13:18.381 Test: blockdev nvme passthru rw ...passed 00:13:18.381 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.381 Test: blockdev nvme admin passthru ...passed 00:13:18.381 Test: blockdev copy ...passed 00:13:18.381 Suite: bdevio tests on: Malloc2p7 00:13:18.381 Test: blockdev write read block ...passed 00:13:18.381 Test: blockdev write zeroes read block ...passed 00:13:18.381 Test: blockdev write zeroes read no split ...passed 00:13:18.381 Test: blockdev write zeroes read split ...passed 00:13:18.381 Test: blockdev write zeroes read split partial ...passed 00:13:18.381 Test: blockdev reset ...passed 00:13:18.381 Test: blockdev write read 8 blocks ...passed 00:13:18.381 Test: blockdev write read size > 128k ...passed 00:13:18.381 Test: blockdev write read invalid size ...passed 00:13:18.381 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.381 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.381 Test: blockdev write read max offset ...passed 00:13:18.381 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.381 Test: blockdev writev readv 8 blocks ...passed 00:13:18.381 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.381 Test: blockdev writev readv block ...passed 00:13:18.381 Test: blockdev writev readv size > 128k ...passed 00:13:18.381 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.381 Test: blockdev comparev and writev ...passed 00:13:18.381 Test: blockdev nvme passthru rw ...passed 00:13:18.381 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.381 Test: blockdev nvme admin passthru ...passed 00:13:18.381 Test: blockdev copy ...passed 00:13:18.381 Suite: bdevio tests on: Malloc2p6 00:13:18.381 Test: blockdev write read block ...passed 00:13:18.381 Test: blockdev write zeroes read block ...passed 00:13:18.381 Test: blockdev write zeroes read no split ...passed 00:13:18.381 Test: blockdev write zeroes read split ...passed 00:13:18.381 Test: blockdev write zeroes read split partial ...passed 00:13:18.381 Test: blockdev reset ...passed 00:13:18.381 Test: blockdev write read 8 blocks ...passed 00:13:18.381 Test: blockdev write read size > 128k ...passed 00:13:18.381 Test: blockdev write read invalid size ...passed 00:13:18.381 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.381 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.381 Test: blockdev write read max offset ...passed 00:13:18.381 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.381 Test: blockdev writev readv 8 blocks ...passed 00:13:18.381 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.381 Test: blockdev writev readv block ...passed 00:13:18.381 Test: blockdev writev readv size > 128k ...passed 00:13:18.381 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.381 Test: blockdev comparev and writev ...passed 00:13:18.381 Test: blockdev nvme passthru rw ...passed 00:13:18.381 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.381 Test: blockdev nvme admin passthru ...passed 00:13:18.381 Test: blockdev copy ...passed 
00:13:18.381 Suite: bdevio tests on: Malloc2p5 00:13:18.381 Test: blockdev write read block ...passed 00:13:18.381 Test: blockdev write zeroes read block ...passed 00:13:18.381 Test: blockdev write zeroes read no split ...passed 00:13:18.381 Test: blockdev write zeroes read split ...passed 00:13:18.381 Test: blockdev write zeroes read split partial ...passed 00:13:18.381 Test: blockdev reset ...passed 00:13:18.381 Test: blockdev write read 8 blocks ...passed 00:13:18.381 Test: blockdev write read size > 128k ...passed 00:13:18.381 Test: blockdev write read invalid size ...passed 00:13:18.381 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.381 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.381 Test: blockdev write read max offset ...passed 00:13:18.381 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.381 Test: blockdev writev readv 8 blocks ...passed 00:13:18.381 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.381 Test: blockdev writev readv block ...passed 00:13:18.381 Test: blockdev writev readv size > 128k ...passed 00:13:18.381 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.381 Test: blockdev comparev and writev ...passed 00:13:18.381 Test: blockdev nvme passthru rw ...passed 00:13:18.381 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.381 Test: blockdev nvme admin passthru ...passed 00:13:18.381 Test: blockdev copy ...passed 00:13:18.381 Suite: bdevio tests on: Malloc2p4 00:13:18.381 Test: blockdev write read block ...passed 00:13:18.381 Test: blockdev write zeroes read block ...passed 00:13:18.381 Test: blockdev write zeroes read no split ...passed 00:13:18.381 Test: blockdev write zeroes read split ...passed 00:13:18.381 Test: blockdev write zeroes read split partial ...passed 00:13:18.381 Test: blockdev reset ...passed 00:13:18.381 Test: blockdev write read 8 blocks ...passed 00:13:18.381 Test: blockdev write read size > 128k ...passed 00:13:18.381 Test: blockdev write read invalid size ...passed 00:13:18.381 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.381 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.381 Test: blockdev write read max offset ...passed 00:13:18.381 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.381 Test: blockdev writev readv 8 blocks ...passed 00:13:18.381 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.381 Test: blockdev writev readv block ...passed 00:13:18.381 Test: blockdev writev readv size > 128k ...passed 00:13:18.381 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.381 Test: blockdev comparev and writev ...passed 00:13:18.381 Test: blockdev nvme passthru rw ...passed 00:13:18.381 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.381 Test: blockdev nvme admin passthru ...passed 00:13:18.381 Test: blockdev copy ...passed 00:13:18.381 Suite: bdevio tests on: Malloc2p3 00:13:18.381 Test: blockdev write read block ...passed 00:13:18.381 Test: blockdev write zeroes read block ...passed 00:13:18.381 Test: blockdev write zeroes read no split ...passed 00:13:18.381 Test: blockdev write zeroes read split ...passed 00:13:18.381 Test: blockdev write zeroes read split partial ...passed 00:13:18.381 Test: blockdev reset ...passed 00:13:18.381 Test: blockdev write read 8 blocks ...passed 00:13:18.381 Test: blockdev write read size > 128k ...passed 00:13:18.381 Test: 
blockdev write read invalid size ...passed 00:13:18.381 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.381 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.381 Test: blockdev write read max offset ...passed 00:13:18.381 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.381 Test: blockdev writev readv 8 blocks ...passed 00:13:18.381 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.381 Test: blockdev writev readv block ...passed 00:13:18.381 Test: blockdev writev readv size > 128k ...passed 00:13:18.381 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.381 Test: blockdev comparev and writev ...passed 00:13:18.381 Test: blockdev nvme passthru rw ...passed 00:13:18.381 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.381 Test: blockdev nvme admin passthru ...passed 00:13:18.381 Test: blockdev copy ...passed 00:13:18.381 Suite: bdevio tests on: Malloc2p2 00:13:18.381 Test: blockdev write read block ...passed 00:13:18.381 Test: blockdev write zeroes read block ...passed 00:13:18.381 Test: blockdev write zeroes read no split ...passed 00:13:18.381 Test: blockdev write zeroes read split ...passed 00:13:18.381 Test: blockdev write zeroes read split partial ...passed 00:13:18.381 Test: blockdev reset ...passed 00:13:18.381 Test: blockdev write read 8 blocks ...passed 00:13:18.381 Test: blockdev write read size > 128k ...passed 00:13:18.381 Test: blockdev write read invalid size ...passed 00:13:18.381 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.381 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.381 Test: blockdev write read max offset ...passed 00:13:18.381 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.381 Test: blockdev writev readv 8 blocks ...passed 00:13:18.381 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.381 Test: blockdev writev readv block ...passed 00:13:18.381 Test: blockdev writev readv size > 128k ...passed 00:13:18.381 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.381 Test: blockdev comparev and writev ...passed 00:13:18.381 Test: blockdev nvme passthru rw ...passed 00:13:18.381 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.381 Test: blockdev nvme admin passthru ...passed 00:13:18.381 Test: blockdev copy ...passed 00:13:18.381 Suite: bdevio tests on: Malloc2p1 00:13:18.381 Test: blockdev write read block ...passed 00:13:18.381 Test: blockdev write zeroes read block ...passed 00:13:18.381 Test: blockdev write zeroes read no split ...passed 00:13:18.381 Test: blockdev write zeroes read split ...passed 00:13:18.381 Test: blockdev write zeroes read split partial ...passed 00:13:18.381 Test: blockdev reset ...passed 00:13:18.381 Test: blockdev write read 8 blocks ...passed 00:13:18.381 Test: blockdev write read size > 128k ...passed 00:13:18.381 Test: blockdev write read invalid size ...passed 00:13:18.381 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.381 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.381 Test: blockdev write read max offset ...passed 00:13:18.381 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.381 Test: blockdev writev readv 8 blocks ...passed 00:13:18.381 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.381 Test: blockdev writev readv block ...passed 
00:13:18.381 Test: blockdev writev readv size > 128k ...passed 00:13:18.381 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.381 Test: blockdev comparev and writev ...passed 00:13:18.381 Test: blockdev nvme passthru rw ...passed 00:13:18.382 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.382 Test: blockdev nvme admin passthru ...passed 00:13:18.382 Test: blockdev copy ...passed 00:13:18.382 Suite: bdevio tests on: Malloc2p0 00:13:18.382 Test: blockdev write read block ...passed 00:13:18.382 Test: blockdev write zeroes read block ...passed 00:13:18.382 Test: blockdev write zeroes read no split ...passed 00:13:18.382 Test: blockdev write zeroes read split ...passed 00:13:18.382 Test: blockdev write zeroes read split partial ...passed 00:13:18.382 Test: blockdev reset ...passed 00:13:18.382 Test: blockdev write read 8 blocks ...passed 00:13:18.382 Test: blockdev write read size > 128k ...passed 00:13:18.382 Test: blockdev write read invalid size ...passed 00:13:18.382 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.382 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.382 Test: blockdev write read max offset ...passed 00:13:18.382 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.382 Test: blockdev writev readv 8 blocks ...passed 00:13:18.382 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.382 Test: blockdev writev readv block ...passed 00:13:18.382 Test: blockdev writev readv size > 128k ...passed 00:13:18.382 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.382 Test: blockdev comparev and writev ...passed 00:13:18.382 Test: blockdev nvme passthru rw ...passed 00:13:18.382 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.382 Test: blockdev nvme admin passthru ...passed 00:13:18.382 Test: blockdev copy ...passed 00:13:18.382 Suite: bdevio tests on: Malloc1p1 00:13:18.382 Test: blockdev write read block ...passed 00:13:18.382 Test: blockdev write zeroes read block ...passed 00:13:18.382 Test: blockdev write zeroes read no split ...passed 00:13:18.382 Test: blockdev write zeroes read split ...passed 00:13:18.382 Test: blockdev write zeroes read split partial ...passed 00:13:18.382 Test: blockdev reset ...passed 00:13:18.382 Test: blockdev write read 8 blocks ...passed 00:13:18.382 Test: blockdev write read size > 128k ...passed 00:13:18.382 Test: blockdev write read invalid size ...passed 00:13:18.382 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.382 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.382 Test: blockdev write read max offset ...passed 00:13:18.382 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.382 Test: blockdev writev readv 8 blocks ...passed 00:13:18.382 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.382 Test: blockdev writev readv block ...passed 00:13:18.382 Test: blockdev writev readv size > 128k ...passed 00:13:18.382 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.382 Test: blockdev comparev and writev ...passed 00:13:18.382 Test: blockdev nvme passthru rw ...passed 00:13:18.382 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.382 Test: blockdev nvme admin passthru ...passed 00:13:18.382 Test: blockdev copy ...passed 00:13:18.382 Suite: bdevio tests on: Malloc1p0 00:13:18.382 Test: blockdev write read block ...passed 00:13:18.382 Test: blockdev 
write zeroes read block ...passed 00:13:18.382 Test: blockdev write zeroes read no split ...passed 00:13:18.382 Test: blockdev write zeroes read split ...passed 00:13:18.382 Test: blockdev write zeroes read split partial ...passed 00:13:18.382 Test: blockdev reset ...passed 00:13:18.382 Test: blockdev write read 8 blocks ...passed 00:13:18.382 Test: blockdev write read size > 128k ...passed 00:13:18.382 Test: blockdev write read invalid size ...passed 00:13:18.382 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.382 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.382 Test: blockdev write read max offset ...passed 00:13:18.382 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.382 Test: blockdev writev readv 8 blocks ...passed 00:13:18.382 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.382 Test: blockdev writev readv block ...passed 00:13:18.382 Test: blockdev writev readv size > 128k ...passed 00:13:18.382 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.382 Test: blockdev comparev and writev ...passed 00:13:18.382 Test: blockdev nvme passthru rw ...passed 00:13:18.382 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.382 Test: blockdev nvme admin passthru ...passed 00:13:18.382 Test: blockdev copy ...passed 00:13:18.382 Suite: bdevio tests on: Malloc0 00:13:18.382 Test: blockdev write read block ...passed 00:13:18.382 Test: blockdev write zeroes read block ...passed 00:13:18.382 Test: blockdev write zeroes read no split ...passed 00:13:18.382 Test: blockdev write zeroes read split ...passed 00:13:18.382 Test: blockdev write zeroes read split partial ...passed 00:13:18.382 Test: blockdev reset ...passed 00:13:18.382 Test: blockdev write read 8 blocks ...passed 00:13:18.382 Test: blockdev write read size > 128k ...passed 00:13:18.382 Test: blockdev write read invalid size ...passed 00:13:18.382 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.382 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.382 Test: blockdev write read max offset ...passed 00:13:18.382 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.382 Test: blockdev writev readv 8 blocks ...passed 00:13:18.382 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.382 Test: blockdev writev readv block ...passed 00:13:18.382 Test: blockdev writev readv size > 128k ...passed 00:13:18.382 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.382 Test: blockdev comparev and writev ...passed 00:13:18.382 Test: blockdev nvme passthru rw ...passed 00:13:18.382 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.382 Test: blockdev nvme admin passthru ...passed 00:13:18.382 Test: blockdev copy ...passed 00:13:18.382 00:13:18.382 Run Summary: Type Total Ran Passed Failed Inactive 00:13:18.382 suites 16 16 n/a 0 0 00:13:18.382 tests 368 368 368 0 0 00:13:18.382 asserts 2224 2224 2224 0 n/a 00:13:18.382 00:13:18.382 Elapsed time = 0.797 seconds 00:13:18.382 0 00:13:18.640 04:55:48 -- bdev/blockdev.sh@293 -- # killprocess 121130 00:13:18.640 04:55:48 -- common/autotest_common.sh@926 -- # '[' -z 121130 ']' 00:13:18.640 04:55:48 -- common/autotest_common.sh@930 -- # kill -0 121130 00:13:18.640 04:55:48 -- common/autotest_common.sh@931 -- # uname 00:13:18.640 04:55:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:18.640 04:55:48 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121130 00:13:18.640 04:55:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:18.641 04:55:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:18.641 04:55:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121130' 00:13:18.641 killing process with pid 121130 00:13:18.641 04:55:48 -- common/autotest_common.sh@945 -- # kill 121130 00:13:18.641 04:55:48 -- common/autotest_common.sh@950 -- # wait 121130 00:13:19.208 04:55:48 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:13:19.208 00:13:19.208 real 0m2.213s 00:13:19.208 user 0m5.104s 00:13:19.208 sys 0m0.679s 00:13:19.208 04:55:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.208 04:55:48 -- common/autotest_common.sh@10 -- # set +x 00:13:19.208 ************************************ 00:13:19.208 END TEST bdev_bounds 00:13:19.208 ************************************ 00:13:19.208 04:55:48 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:13:19.208 04:55:48 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:19.208 04:55:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:19.208 04:55:48 -- common/autotest_common.sh@10 -- # set +x 00:13:19.208 ************************************ 00:13:19.208 START TEST bdev_nbd 00:13:19.208 ************************************ 00:13:19.209 04:55:48 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:13:19.209 04:55:48 -- bdev/blockdev.sh@298 -- # uname -s 00:13:19.209 04:55:48 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:13:19.209 04:55:48 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.209 04:55:48 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:19.209 04:55:48 -- bdev/blockdev.sh@302 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:19.209 04:55:48 -- bdev/blockdev.sh@302 -- # local bdev_all 00:13:19.209 04:55:48 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:13:19.209 04:55:48 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:13:19.209 04:55:48 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:19.209 04:55:48 -- bdev/blockdev.sh@309 -- # local nbd_all 00:13:19.209 04:55:48 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:13:19.209 04:55:48 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:19.209 04:55:48 -- bdev/blockdev.sh@312 -- # local nbd_list 00:13:19.209 04:55:48 -- bdev/blockdev.sh@313 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 
'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:19.209 04:55:48 -- bdev/blockdev.sh@313 -- # local bdev_list 00:13:19.209 04:55:48 -- bdev/blockdev.sh@316 -- # nbd_pid=121200 00:13:19.209 04:55:48 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:19.209 04:55:48 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:19.209 04:55:48 -- bdev/blockdev.sh@318 -- # waitforlisten 121200 /var/tmp/spdk-nbd.sock 00:13:19.209 04:55:48 -- common/autotest_common.sh@819 -- # '[' -z 121200 ']' 00:13:19.209 04:55:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:19.209 04:55:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:19.209 04:55:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:19.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:19.209 04:55:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:19.209 04:55:48 -- common/autotest_common.sh@10 -- # set +x 00:13:19.209 [2024-04-27 04:55:49.026837] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:19.209 [2024-04-27 04:55:49.027318] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.468 [2024-04-27 04:55:49.188867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.468 [2024-04-27 04:55:49.324396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.727 [2024-04-27 04:55:49.532899] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:19.727 [2024-04-27 04:55:49.533057] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:19.727 [2024-04-27 04:55:49.540829] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:19.727 [2024-04-27 04:55:49.540969] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:19.727 [2024-04-27 04:55:49.548870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:19.727 [2024-04-27 04:55:49.549013] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:19.727 [2024-04-27 04:55:49.549060] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:19.984 [2024-04-27 04:55:49.674202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:19.984 [2024-04-27 04:55:49.674397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.985 [2024-04-27 04:55:49.674496] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:19.985 [2024-04-27 04:55:49.674538] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.985 [2024-04-27 04:55:49.677626] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.985 [2024-04-27 04:55:49.677709] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:20.920 04:55:50 -- common/autotest_common.sh@848 -- # (( i == 0 
)) 00:13:20.920 04:55:50 -- common/autotest_common.sh@852 -- # return 0 00:13:20.920 04:55:50 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:13:20.921 04:55:50 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:20.921 04:55:50 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:20.921 04:55:50 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:20.921 04:55:50 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:13:20.921 04:55:50 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:20.921 04:55:50 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:20.921 04:55:50 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:20.921 04:55:50 -- bdev/nbd_common.sh@24 -- # local i 00:13:20.921 04:55:50 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:20.921 04:55:50 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:20.921 04:55:50 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:20.921 04:55:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:13:21.180 04:55:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:21.180 04:55:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:21.180 04:55:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:21.180 04:55:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:21.180 04:55:50 -- common/autotest_common.sh@857 -- # local i 00:13:21.180 04:55:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:21.180 04:55:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:21.180 04:55:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:21.180 04:55:50 -- common/autotest_common.sh@861 -- # break 00:13:21.180 04:55:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:21.180 04:55:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:21.180 04:55:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.180 1+0 records in 00:13:21.180 1+0 records out 00:13:21.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407916 s, 10.0 MB/s 00:13:21.180 04:55:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.180 04:55:50 -- common/autotest_common.sh@874 -- # size=4096 00:13:21.180 04:55:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.180 04:55:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:21.180 04:55:50 -- common/autotest_common.sh@877 -- # return 0 00:13:21.180 04:55:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:21.181 04:55:50 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:21.181 04:55:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:13:21.440 04:55:51 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:21.440 04:55:51 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:21.440 04:55:51 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:21.440 04:55:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:13:21.440 04:55:51 -- common/autotest_common.sh@857 -- # local i 00:13:21.440 04:55:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:21.440 04:55:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:21.440 04:55:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:13:21.440 04:55:51 -- common/autotest_common.sh@861 -- # break 00:13:21.440 04:55:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:21.440 04:55:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:21.440 04:55:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.440 1+0 records in 00:13:21.440 1+0 records out 00:13:21.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414417 s, 9.9 MB/s 00:13:21.440 04:55:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.440 04:55:51 -- common/autotest_common.sh@874 -- # size=4096 00:13:21.440 04:55:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.440 04:55:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:21.440 04:55:51 -- common/autotest_common.sh@877 -- # return 0 00:13:21.440 04:55:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:21.440 04:55:51 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:21.440 04:55:51 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:13:22.008 04:55:51 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:22.008 04:55:51 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:22.008 04:55:51 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:22.008 04:55:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:13:22.008 04:55:51 -- common/autotest_common.sh@857 -- # local i 00:13:22.008 04:55:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:22.008 04:55:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:22.008 04:55:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:13:22.008 04:55:51 -- common/autotest_common.sh@861 -- # break 00:13:22.008 04:55:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:22.008 04:55:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:22.008 04:55:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.008 1+0 records in 00:13:22.008 1+0 records out 00:13:22.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421531 s, 9.7 MB/s 00:13:22.008 04:55:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.008 04:55:51 -- common/autotest_common.sh@874 -- # size=4096 00:13:22.008 04:55:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.008 04:55:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:22.008 04:55:51 -- common/autotest_common.sh@877 -- # return 0 00:13:22.008 04:55:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:22.008 04:55:51 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 
00:13:22.008 04:55:51 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:13:22.268 04:55:51 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:22.268 04:55:51 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:22.268 04:55:51 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:22.268 04:55:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:13:22.268 04:55:51 -- common/autotest_common.sh@857 -- # local i 00:13:22.268 04:55:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:22.268 04:55:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:22.268 04:55:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:13:22.268 04:55:51 -- common/autotest_common.sh@861 -- # break 00:13:22.268 04:55:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:22.268 04:55:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:22.268 04:55:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.268 1+0 records in 00:13:22.268 1+0 records out 00:13:22.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462513 s, 8.9 MB/s 00:13:22.268 04:55:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.268 04:55:51 -- common/autotest_common.sh@874 -- # size=4096 00:13:22.268 04:55:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.268 04:55:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:22.268 04:55:51 -- common/autotest_common.sh@877 -- # return 0 00:13:22.268 04:55:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:22.268 04:55:51 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:22.268 04:55:51 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:13:22.526 04:55:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:22.526 04:55:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:22.526 04:55:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:22.526 04:55:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:13:22.526 04:55:52 -- common/autotest_common.sh@857 -- # local i 00:13:22.526 04:55:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:22.526 04:55:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:22.526 04:55:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:13:22.526 04:55:52 -- common/autotest_common.sh@861 -- # break 00:13:22.526 04:55:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:22.526 04:55:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:22.526 04:55:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.526 1+0 records in 00:13:22.526 1+0 records out 00:13:22.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608399 s, 6.7 MB/s 00:13:22.526 04:55:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.526 04:55:52 -- common/autotest_common.sh@874 -- # size=4096 00:13:22.526 04:55:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.526 04:55:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:22.526 04:55:52 -- common/autotest_common.sh@877 -- # return 0 00:13:22.526 04:55:52 -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:22.526 04:55:52 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:22.526 04:55:52 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:13:22.785 04:55:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:22.785 04:55:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:22.785 04:55:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:22.785 04:55:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:13:22.785 04:55:52 -- common/autotest_common.sh@857 -- # local i 00:13:22.785 04:55:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:22.785 04:55:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:22.785 04:55:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:13:22.785 04:55:52 -- common/autotest_common.sh@861 -- # break 00:13:22.785 04:55:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:22.785 04:55:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:22.785 04:55:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.785 1+0 records in 00:13:22.785 1+0 records out 00:13:22.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520678 s, 7.9 MB/s 00:13:22.785 04:55:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.785 04:55:52 -- common/autotest_common.sh@874 -- # size=4096 00:13:22.785 04:55:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.785 04:55:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:22.785 04:55:52 -- common/autotest_common.sh@877 -- # return 0 00:13:22.785 04:55:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:22.785 04:55:52 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:22.785 04:55:52 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:13:23.351 04:55:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:13:23.351 04:55:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:13:23.351 04:55:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:13:23.351 04:55:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:13:23.351 04:55:52 -- common/autotest_common.sh@857 -- # local i 00:13:23.351 04:55:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:23.351 04:55:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:23.351 04:55:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:13:23.351 04:55:52 -- common/autotest_common.sh@861 -- # break 00:13:23.351 04:55:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:23.351 04:55:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:23.351 04:55:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.351 1+0 records in 00:13:23.351 1+0 records out 00:13:23.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430621 s, 9.5 MB/s 00:13:23.351 04:55:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.351 04:55:52 -- common/autotest_common.sh@874 -- # size=4096 00:13:23.351 04:55:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.351 04:55:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 
00:13:23.351 04:55:52 -- common/autotest_common.sh@877 -- # return 0 00:13:23.351 04:55:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:23.351 04:55:52 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:23.351 04:55:52 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:13:23.608 04:55:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:13:23.608 04:55:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:13:23.608 04:55:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:13:23.608 04:55:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:13:23.608 04:55:53 -- common/autotest_common.sh@857 -- # local i 00:13:23.608 04:55:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:23.608 04:55:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:23.608 04:55:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:13:23.608 04:55:53 -- common/autotest_common.sh@861 -- # break 00:13:23.608 04:55:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:23.608 04:55:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:23.608 04:55:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.608 1+0 records in 00:13:23.608 1+0 records out 00:13:23.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583364 s, 7.0 MB/s 00:13:23.608 04:55:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.608 04:55:53 -- common/autotest_common.sh@874 -- # size=4096 00:13:23.608 04:55:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.608 04:55:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:23.608 04:55:53 -- common/autotest_common.sh@877 -- # return 0 00:13:23.608 04:55:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:23.608 04:55:53 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:23.608 04:55:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:13:23.881 04:55:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:13:23.881 04:55:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:13:23.881 04:55:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:13:23.881 04:55:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:13:23.881 04:55:53 -- common/autotest_common.sh@857 -- # local i 00:13:23.881 04:55:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:23.881 04:55:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:23.881 04:55:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:13:23.881 04:55:53 -- common/autotest_common.sh@861 -- # break 00:13:23.881 04:55:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:23.881 04:55:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:23.881 04:55:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.881 1+0 records in 00:13:23.881 1+0 records out 00:13:23.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642249 s, 6.4 MB/s 00:13:23.881 04:55:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.881 04:55:53 -- common/autotest_common.sh@874 -- # size=4096 00:13:23.881 04:55:53 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.881 04:55:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:23.881 04:55:53 -- common/autotest_common.sh@877 -- # return 0 00:13:23.881 04:55:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:23.882 04:55:53 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:23.882 04:55:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:13:24.139 04:55:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:13:24.139 04:55:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:13:24.139 04:55:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:13:24.139 04:55:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:13:24.139 04:55:53 -- common/autotest_common.sh@857 -- # local i 00:13:24.139 04:55:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:24.139 04:55:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:24.139 04:55:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:13:24.139 04:55:53 -- common/autotest_common.sh@861 -- # break 00:13:24.139 04:55:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:24.140 04:55:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:24.140 04:55:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.140 1+0 records in 00:13:24.140 1+0 records out 00:13:24.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000708624 s, 5.8 MB/s 00:13:24.140 04:55:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.140 04:55:53 -- common/autotest_common.sh@874 -- # size=4096 00:13:24.140 04:55:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.140 04:55:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:24.140 04:55:53 -- common/autotest_common.sh@877 -- # return 0 00:13:24.140 04:55:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.140 04:55:53 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:24.140 04:55:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:13:24.398 04:55:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:13:24.398 04:55:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:13:24.398 04:55:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:13:24.398 04:55:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:13:24.398 04:55:54 -- common/autotest_common.sh@857 -- # local i 00:13:24.398 04:55:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:24.398 04:55:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:24.398 04:55:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:13:24.398 04:55:54 -- common/autotest_common.sh@861 -- # break 00:13:24.398 04:55:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:24.398 04:55:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:24.398 04:55:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.398 1+0 records in 00:13:24.398 1+0 records out 00:13:24.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000899392 s, 4.6 MB/s 00:13:24.398 04:55:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.398 04:55:54 -- 
common/autotest_common.sh@874 -- # size=4096 00:13:24.398 04:55:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.398 04:55:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:24.398 04:55:54 -- common/autotest_common.sh@877 -- # return 0 00:13:24.398 04:55:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.398 04:55:54 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:24.398 04:55:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:13:24.658 04:55:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:13:24.658 04:55:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:13:24.658 04:55:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:13:24.658 04:55:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:13:24.658 04:55:54 -- common/autotest_common.sh@857 -- # local i 00:13:24.658 04:55:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:24.658 04:55:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:24.658 04:55:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:13:24.658 04:55:54 -- common/autotest_common.sh@861 -- # break 00:13:24.658 04:55:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:24.658 04:55:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:24.658 04:55:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.658 1+0 records in 00:13:24.658 1+0 records out 00:13:24.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600232 s, 6.8 MB/s 00:13:24.658 04:55:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.658 04:55:54 -- common/autotest_common.sh@874 -- # size=4096 00:13:24.658 04:55:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.658 04:55:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:24.658 04:55:54 -- common/autotest_common.sh@877 -- # return 0 00:13:24.658 04:55:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.658 04:55:54 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:24.659 04:55:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:13:24.918 04:55:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:13:24.918 04:55:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:13:24.918 04:55:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:13:24.918 04:55:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:13:24.918 04:55:54 -- common/autotest_common.sh@857 -- # local i 00:13:24.918 04:55:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:24.918 04:55:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:24.918 04:55:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:13:24.918 04:55:54 -- common/autotest_common.sh@861 -- # break 00:13:24.918 04:55:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:24.918 04:55:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:24.918 04:55:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.918 1+0 records in 00:13:24.918 1+0 records out 00:13:24.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000787338 s, 5.2 MB/s 00:13:24.918 04:55:54 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.918 04:55:54 -- common/autotest_common.sh@874 -- # size=4096 00:13:24.918 04:55:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.918 04:55:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:24.918 04:55:54 -- common/autotest_common.sh@877 -- # return 0 00:13:24.918 04:55:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.918 04:55:54 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:24.918 04:55:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:13:25.485 04:55:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:13:25.485 04:55:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:13:25.485 04:55:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:13:25.485 04:55:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:13:25.485 04:55:55 -- common/autotest_common.sh@857 -- # local i 00:13:25.485 04:55:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:25.485 04:55:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:25.485 04:55:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:13:25.485 04:55:55 -- common/autotest_common.sh@861 -- # break 00:13:25.485 04:55:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:25.485 04:55:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:25.485 04:55:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.485 1+0 records in 00:13:25.485 1+0 records out 00:13:25.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000845142 s, 4.8 MB/s 00:13:25.485 04:55:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.485 04:55:55 -- common/autotest_common.sh@874 -- # size=4096 00:13:25.485 04:55:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.485 04:55:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:25.485 04:55:55 -- common/autotest_common.sh@877 -- # return 0 00:13:25.485 04:55:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:25.485 04:55:55 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:25.485 04:55:55 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:13:25.744 04:55:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:13:25.744 04:55:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:13:25.744 04:55:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:13:25.744 04:55:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:13:25.744 04:55:55 -- common/autotest_common.sh@857 -- # local i 00:13:25.744 04:55:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:25.744 04:55:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:25.744 04:55:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:13:25.744 04:55:55 -- common/autotest_common.sh@861 -- # break 00:13:25.744 04:55:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:25.744 04:55:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:25.744 04:55:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.744 1+0 records in 00:13:25.744 1+0 records out 
00:13:25.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000961678 s, 4.3 MB/s 00:13:25.744 04:55:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.744 04:55:55 -- common/autotest_common.sh@874 -- # size=4096 00:13:25.744 04:55:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.744 04:55:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:25.744 04:55:55 -- common/autotest_common.sh@877 -- # return 0 00:13:25.744 04:55:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:25.744 04:55:55 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:25.744 04:55:55 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:13:26.003 04:55:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:13:26.003 04:55:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:13:26.003 04:55:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:13:26.003 04:55:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:13:26.003 04:55:55 -- common/autotest_common.sh@857 -- # local i 00:13:26.003 04:55:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:26.003 04:55:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:26.003 04:55:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:13:26.003 04:55:55 -- common/autotest_common.sh@861 -- # break 00:13:26.003 04:55:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:26.003 04:55:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:26.003 04:55:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:26.003 1+0 records in 00:13:26.003 1+0 records out 00:13:26.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00324348 s, 1.3 MB/s 00:13:26.003 04:55:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.003 04:55:55 -- common/autotest_common.sh@874 -- # size=4096 00:13:26.003 04:55:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.003 04:55:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:26.003 04:55:55 -- common/autotest_common.sh@877 -- # return 0 00:13:26.003 04:55:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:26.003 04:55:55 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:26.003 04:55:55 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:26.262 04:55:56 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd0", 00:13:26.262 "bdev_name": "Malloc0" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd1", 00:13:26.262 "bdev_name": "Malloc1p0" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd2", 00:13:26.262 "bdev_name": "Malloc1p1" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd3", 00:13:26.262 "bdev_name": "Malloc2p0" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd4", 00:13:26.262 "bdev_name": "Malloc2p1" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd5", 00:13:26.262 "bdev_name": "Malloc2p2" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd6", 00:13:26.262 "bdev_name": "Malloc2p3" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd7", 00:13:26.262 "bdev_name": "Malloc2p4" 00:13:26.262 }, 
00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd8", 00:13:26.262 "bdev_name": "Malloc2p5" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd9", 00:13:26.262 "bdev_name": "Malloc2p6" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd10", 00:13:26.262 "bdev_name": "Malloc2p7" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd11", 00:13:26.262 "bdev_name": "TestPT" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd12", 00:13:26.262 "bdev_name": "raid0" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd13", 00:13:26.262 "bdev_name": "concat0" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd14", 00:13:26.262 "bdev_name": "raid1" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd15", 00:13:26.262 "bdev_name": "AIO0" 00:13:26.262 } 00:13:26.262 ]' 00:13:26.262 04:55:56 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:26.262 04:55:56 -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd0", 00:13:26.262 "bdev_name": "Malloc0" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd1", 00:13:26.262 "bdev_name": "Malloc1p0" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd2", 00:13:26.262 "bdev_name": "Malloc1p1" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd3", 00:13:26.262 "bdev_name": "Malloc2p0" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd4", 00:13:26.262 "bdev_name": "Malloc2p1" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd5", 00:13:26.262 "bdev_name": "Malloc2p2" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd6", 00:13:26.262 "bdev_name": "Malloc2p3" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd7", 00:13:26.262 "bdev_name": "Malloc2p4" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd8", 00:13:26.262 "bdev_name": "Malloc2p5" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd9", 00:13:26.262 "bdev_name": "Malloc2p6" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd10", 00:13:26.262 "bdev_name": "Malloc2p7" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd11", 00:13:26.262 "bdev_name": "TestPT" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd12", 00:13:26.262 "bdev_name": "raid0" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd13", 00:13:26.262 "bdev_name": "concat0" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd14", 00:13:26.262 "bdev_name": "raid1" 00:13:26.262 }, 00:13:26.262 { 00:13:26.262 "nbd_device": "/dev/nbd15", 00:13:26.262 "bdev_name": "AIO0" 00:13:26.262 } 00:13:26.262 ]' 00:13:26.262 04:55:56 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:26.262 04:55:56 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:13:26.262 04:55:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.262 04:55:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:13:26.262 04:55:56 -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:13:26.262 04:55:56 -- bdev/nbd_common.sh@51 -- # local i 00:13:26.262 04:55:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.262 04:55:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:26.520 04:55:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:26.520 04:55:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:26.520 04:55:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:26.520 04:55:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.520 04:55:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.520 04:55:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:26.520 04:55:56 -- bdev/nbd_common.sh@41 -- # break 00:13:26.520 04:55:56 -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.520 04:55:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.520 04:55:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:26.778 04:55:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:26.778 04:55:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:26.778 04:55:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:26.778 04:55:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.778 04:55:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.778 04:55:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:26.778 04:55:56 -- bdev/nbd_common.sh@41 -- # break 00:13:26.778 04:55:56 -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.778 04:55:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.778 04:55:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:27.344 04:55:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:27.344 04:55:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:27.344 04:55:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:27.344 04:55:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.344 04:55:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.344 04:55:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:27.344 04:55:56 -- bdev/nbd_common.sh@41 -- # break 00:13:27.344 04:55:56 -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.344 04:55:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.344 04:55:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:27.344 04:55:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:27.344 04:55:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:27.344 04:55:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:27.344 04:55:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.344 04:55:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.344 04:55:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:27.344 04:55:57 -- bdev/nbd_common.sh@41 -- # break 00:13:27.344 04:55:57 -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.344 04:55:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.344 04:55:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:27.602 04:55:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:27.602 04:55:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:27.602 
04:55:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:27.602 04:55:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.602 04:55:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.602 04:55:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:27.602 04:55:57 -- bdev/nbd_common.sh@41 -- # break 00:13:27.602 04:55:57 -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.602 04:55:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.602 04:55:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:27.859 04:55:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:27.859 04:55:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:27.859 04:55:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:27.859 04:55:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.859 04:55:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.859 04:55:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:27.859 04:55:57 -- bdev/nbd_common.sh@41 -- # break 00:13:27.859 04:55:57 -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.859 04:55:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.859 04:55:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:28.116 04:55:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:28.116 04:55:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:28.116 04:55:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:13:28.116 04:55:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.116 04:55:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.116 04:55:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:28.116 04:55:57 -- bdev/nbd_common.sh@41 -- # break 00:13:28.116 04:55:57 -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.116 04:55:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.116 04:55:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:13:28.374 04:55:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:13:28.374 04:55:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:13:28.374 04:55:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:13:28.374 04:55:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.374 04:55:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.374 04:55:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:13:28.374 04:55:58 -- bdev/nbd_common.sh@41 -- # break 00:13:28.374 04:55:58 -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.374 04:55:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.374 04:55:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:13:28.955 04:55:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:13:28.955 04:55:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:13:28.955 04:55:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:13:28.955 04:55:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.955 04:55:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.955 04:55:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:13:28.955 04:55:58 -- bdev/nbd_common.sh@41 -- # break 00:13:28.955 04:55:58 -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.956 04:55:58 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:13:28.956 04:55:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:13:28.956 04:55:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:13:28.956 04:55:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:13:28.956 04:55:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:13:28.956 04:55:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.956 04:55:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.956 04:55:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:13:28.956 04:55:58 -- bdev/nbd_common.sh@41 -- # break 00:13:28.956 04:55:58 -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.956 04:55:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.956 04:55:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:29.213 04:55:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:29.213 04:55:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:29.213 04:55:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:29.213 04:55:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.213 04:55:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.213 04:55:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:29.213 04:55:59 -- bdev/nbd_common.sh@41 -- # break 00:13:29.213 04:55:59 -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.213 04:55:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.213 04:55:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:29.471 04:55:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:29.471 04:55:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:29.471 04:55:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:29.471 04:55:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.471 04:55:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.471 04:55:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:29.471 04:55:59 -- bdev/nbd_common.sh@41 -- # break 00:13:29.471 04:55:59 -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.471 04:55:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.471 04:55:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:29.727 04:55:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:29.728 04:55:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:29.728 04:55:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:29.728 04:55:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.728 04:55:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.728 04:55:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:29.728 04:55:59 -- bdev/nbd_common.sh@41 -- # break 00:13:29.728 04:55:59 -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.728 04:55:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.728 04:55:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:29.984 04:55:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:29.984 04:55:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:29.985 04:55:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:29.985 04:55:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:13:29.985 04:55:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.985 04:55:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:29.985 04:55:59 -- bdev/nbd_common.sh@41 -- # break 00:13:29.985 04:55:59 -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.985 04:55:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.985 04:55:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@41 -- # break 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@41 -- # break 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.551 04:56:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@65 -- # true 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@65 -- # count=0 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@122 -- # count=0 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@127 -- # return 0 00:13:30.887 04:56:00 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 
'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@12 -- # local i 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:30.887 04:56:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:31.453 /dev/nbd0 00:13:31.453 04:56:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:31.453 04:56:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:31.453 04:56:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:31.453 04:56:01 -- common/autotest_common.sh@857 -- # local i 00:13:31.453 04:56:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:31.453 04:56:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:31.453 04:56:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:31.453 04:56:01 -- common/autotest_common.sh@861 -- # break 00:13:31.453 04:56:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:31.453 04:56:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:31.453 04:56:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.453 1+0 records in 00:13:31.453 1+0 records out 00:13:31.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000659183 s, 6.2 MB/s 00:13:31.453 04:56:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.453 04:56:01 -- common/autotest_common.sh@874 -- # size=4096 00:13:31.453 04:56:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.453 04:56:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:31.453 04:56:01 -- common/autotest_common.sh@877 -- # return 0 00:13:31.453 04:56:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.453 
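The block above is the readiness check the trace repeats for every exported device: after nbd_start_disk, the helper polls /proc/partitions until the new node appears, then reads a single 4 KiB block with O_DIRECT and checks that the copy is non-empty before moving on. A condensed sketch of that pattern, keeping the 20-try budget from the trace but with an assumed sleep interval and an illustrative scratch path:

    waitfornbd() {
        local nbd_name=$1 i
        # Poll until the kernel has registered the device (20 tries as in the trace; interval assumed).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Sanity read: one 4 KiB direct-I/O block must land in the scratch file.
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }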
04:56:01 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:31.453 04:56:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:13:31.711 /dev/nbd1 00:13:31.711 04:56:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:31.711 04:56:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:31.711 04:56:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:13:31.711 04:56:01 -- common/autotest_common.sh@857 -- # local i 00:13:31.711 04:56:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:31.711 04:56:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:31.711 04:56:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:13:31.711 04:56:01 -- common/autotest_common.sh@861 -- # break 00:13:31.711 04:56:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:31.711 04:56:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:31.711 04:56:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.711 1+0 records in 00:13:31.711 1+0 records out 00:13:31.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627276 s, 6.5 MB/s 00:13:31.711 04:56:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.711 04:56:01 -- common/autotest_common.sh@874 -- # size=4096 00:13:31.711 04:56:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.711 04:56:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:31.711 04:56:01 -- common/autotest_common.sh@877 -- # return 0 00:13:31.711 04:56:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.711 04:56:01 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:31.711 04:56:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:13:31.969 /dev/nbd10 00:13:31.969 04:56:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:31.969 04:56:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:31.969 04:56:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:13:31.969 04:56:01 -- common/autotest_common.sh@857 -- # local i 00:13:31.969 04:56:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:31.969 04:56:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:31.969 04:56:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:13:31.969 04:56:01 -- common/autotest_common.sh@861 -- # break 00:13:31.969 04:56:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:31.969 04:56:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:31.969 04:56:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.969 1+0 records in 00:13:31.969 1+0 records out 00:13:31.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00075399 s, 5.4 MB/s 00:13:31.969 04:56:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.969 04:56:01 -- common/autotest_common.sh@874 -- # size=4096 00:13:31.969 04:56:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.969 04:56:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:31.969 04:56:01 -- common/autotest_common.sh@877 -- # return 0 00:13:31.969 04:56:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:13:31.969 04:56:01 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:31.969 04:56:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:13:32.227 /dev/nbd11 00:13:32.227 04:56:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:32.227 04:56:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:32.227 04:56:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:13:32.227 04:56:02 -- common/autotest_common.sh@857 -- # local i 00:13:32.227 04:56:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:32.227 04:56:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:32.227 04:56:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:13:32.227 04:56:02 -- common/autotest_common.sh@861 -- # break 00:13:32.227 04:56:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:32.227 04:56:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:32.227 04:56:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.227 1+0 records in 00:13:32.227 1+0 records out 00:13:32.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606339 s, 6.8 MB/s 00:13:32.227 04:56:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.227 04:56:02 -- common/autotest_common.sh@874 -- # size=4096 00:13:32.227 04:56:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.227 04:56:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:32.227 04:56:02 -- common/autotest_common.sh@877 -- # return 0 00:13:32.227 04:56:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.227 04:56:02 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:32.227 04:56:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:13:32.485 /dev/nbd12 00:13:32.485 04:56:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:32.485 04:56:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:32.485 04:56:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:13:32.485 04:56:02 -- common/autotest_common.sh@857 -- # local i 00:13:32.485 04:56:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:32.485 04:56:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:32.485 04:56:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:13:32.485 04:56:02 -- common/autotest_common.sh@861 -- # break 00:13:32.485 04:56:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:32.485 04:56:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:32.485 04:56:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.485 1+0 records in 00:13:32.485 1+0 records out 00:13:32.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000871824 s, 4.7 MB/s 00:13:32.485 04:56:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.485 04:56:02 -- common/autotest_common.sh@874 -- # size=4096 00:13:32.485 04:56:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.485 04:56:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:32.485 04:56:02 -- common/autotest_common.sh@877 -- # return 0 00:13:32.485 04:56:02 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.485 04:56:02 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:32.485 04:56:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:13:33.051 /dev/nbd13 00:13:33.051 04:56:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:33.051 04:56:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:33.051 04:56:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:13:33.051 04:56:02 -- common/autotest_common.sh@857 -- # local i 00:13:33.051 04:56:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:33.051 04:56:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:33.051 04:56:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:13:33.051 04:56:02 -- common/autotest_common.sh@861 -- # break 00:13:33.051 04:56:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:33.051 04:56:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:33.051 04:56:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.051 1+0 records in 00:13:33.051 1+0 records out 00:13:33.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433117 s, 9.5 MB/s 00:13:33.051 04:56:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.051 04:56:02 -- common/autotest_common.sh@874 -- # size=4096 00:13:33.051 04:56:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.051 04:56:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:33.051 04:56:02 -- common/autotest_common.sh@877 -- # return 0 00:13:33.051 04:56:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.051 04:56:02 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:33.051 04:56:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:13:33.051 /dev/nbd14 00:13:33.309 04:56:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:13:33.309 04:56:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:13:33.309 04:56:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:13:33.309 04:56:02 -- common/autotest_common.sh@857 -- # local i 00:13:33.309 04:56:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:33.309 04:56:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:33.309 04:56:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:13:33.309 04:56:02 -- common/autotest_common.sh@861 -- # break 00:13:33.309 04:56:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:33.309 04:56:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:33.309 04:56:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.309 1+0 records in 00:13:33.309 1+0 records out 00:13:33.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545083 s, 7.5 MB/s 00:13:33.309 04:56:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.309 04:56:02 -- common/autotest_common.sh@874 -- # size=4096 00:13:33.309 04:56:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.309 04:56:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:33.309 04:56:02 -- common/autotest_common.sh@877 -- # return 0 
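Each of these per-device blocks is driven by the same RPC seen throughout the trace: nbd_start_disk exports a bdev through the kernel NBD driver, either onto an explicitly named /dev/nbdX node or, when the device argument is left off, onto a node SPDK picks and prints back (that is how /dev/nbd13 was captured for concat0 earlier). A minimal usage sketch against the same socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # Explicit node, as in the Malloc2p0 /dev/nbd11 call above.
    $rpc -s $sock nbd_start_disk Malloc2p0 /dev/nbd11
    # Let SPDK choose the node; the RPC prints the path it used.
    nbd_device=$($rpc -s $sock nbd_start_disk concat0)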
00:13:33.309 04:56:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.309 04:56:02 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:33.309 04:56:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:13:33.567 /dev/nbd15 00:13:33.567 04:56:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:13:33.567 04:56:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:13:33.567 04:56:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:13:33.567 04:56:03 -- common/autotest_common.sh@857 -- # local i 00:13:33.567 04:56:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:33.567 04:56:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:33.567 04:56:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:13:33.567 04:56:03 -- common/autotest_common.sh@861 -- # break 00:13:33.567 04:56:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:33.567 04:56:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:33.567 04:56:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.567 1+0 records in 00:13:33.567 1+0 records out 00:13:33.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363246 s, 11.3 MB/s 00:13:33.567 04:56:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.567 04:56:03 -- common/autotest_common.sh@874 -- # size=4096 00:13:33.567 04:56:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.567 04:56:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:33.567 04:56:03 -- common/autotest_common.sh@877 -- # return 0 00:13:33.567 04:56:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.567 04:56:03 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:33.567 04:56:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:13:33.825 /dev/nbd2 00:13:33.825 04:56:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:13:33.825 04:56:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:13:33.825 04:56:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:13:33.825 04:56:03 -- common/autotest_common.sh@857 -- # local i 00:13:33.825 04:56:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:33.825 04:56:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:33.825 04:56:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:13:33.825 04:56:03 -- common/autotest_common.sh@861 -- # break 00:13:33.825 04:56:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:33.825 04:56:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:33.825 04:56:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.825 1+0 records in 00:13:33.825 1+0 records out 00:13:33.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000572721 s, 7.2 MB/s 00:13:33.825 04:56:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.825 04:56:03 -- common/autotest_common.sh@874 -- # size=4096 00:13:33.825 04:56:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.825 04:56:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:33.825 04:56:03 -- 
common/autotest_common.sh@877 -- # return 0 00:13:33.825 04:56:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.825 04:56:03 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:33.825 04:56:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:13:34.083 /dev/nbd3 00:13:34.083 04:56:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:13:34.083 04:56:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:13:34.083 04:56:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:13:34.083 04:56:03 -- common/autotest_common.sh@857 -- # local i 00:13:34.083 04:56:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:34.083 04:56:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:34.083 04:56:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:13:34.083 04:56:03 -- common/autotest_common.sh@861 -- # break 00:13:34.083 04:56:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:34.083 04:56:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:34.083 04:56:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.083 1+0 records in 00:13:34.083 1+0 records out 00:13:34.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000969639 s, 4.2 MB/s 00:13:34.083 04:56:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.083 04:56:03 -- common/autotest_common.sh@874 -- # size=4096 00:13:34.083 04:56:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.083 04:56:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:34.083 04:56:03 -- common/autotest_common.sh@877 -- # return 0 00:13:34.083 04:56:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.083 04:56:03 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:34.083 04:56:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:13:34.341 /dev/nbd4 00:13:34.341 04:56:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:13:34.341 04:56:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:13:34.341 04:56:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:13:34.341 04:56:04 -- common/autotest_common.sh@857 -- # local i 00:13:34.341 04:56:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:34.341 04:56:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:34.341 04:56:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:13:34.341 04:56:04 -- common/autotest_common.sh@861 -- # break 00:13:34.341 04:56:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:34.341 04:56:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:34.341 04:56:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.341 1+0 records in 00:13:34.341 1+0 records out 00:13:34.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000537695 s, 7.6 MB/s 00:13:34.341 04:56:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.341 04:56:04 -- common/autotest_common.sh@874 -- # size=4096 00:13:34.341 04:56:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.341 04:56:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:34.341 
04:56:04 -- common/autotest_common.sh@877 -- # return 0 00:13:34.341 04:56:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.341 04:56:04 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:34.341 04:56:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:13:34.599 /dev/nbd5 00:13:34.599 04:56:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:13:34.599 04:56:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:13:34.599 04:56:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:13:34.599 04:56:04 -- common/autotest_common.sh@857 -- # local i 00:13:34.599 04:56:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:34.599 04:56:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:34.599 04:56:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:13:34.599 04:56:04 -- common/autotest_common.sh@861 -- # break 00:13:34.599 04:56:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:34.599 04:56:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:34.599 04:56:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.599 1+0 records in 00:13:34.599 1+0 records out 00:13:34.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000886191 s, 4.6 MB/s 00:13:34.599 04:56:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.599 04:56:04 -- common/autotest_common.sh@874 -- # size=4096 00:13:34.599 04:56:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.599 04:56:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:34.599 04:56:04 -- common/autotest_common.sh@877 -- # return 0 00:13:34.599 04:56:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.599 04:56:04 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:34.599 04:56:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:13:34.857 /dev/nbd6 00:13:35.115 04:56:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:13:35.115 04:56:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:13:35.115 04:56:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:13:35.115 04:56:04 -- common/autotest_common.sh@857 -- # local i 00:13:35.115 04:56:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:35.115 04:56:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:35.115 04:56:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:13:35.115 04:56:04 -- common/autotest_common.sh@861 -- # break 00:13:35.115 04:56:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:35.115 04:56:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:35.115 04:56:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.115 1+0 records in 00:13:35.115 1+0 records out 00:13:35.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00078279 s, 5.2 MB/s 00:13:35.115 04:56:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.115 04:56:04 -- common/autotest_common.sh@874 -- # size=4096 00:13:35.115 04:56:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.115 04:56:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:35.115 
04:56:04 -- common/autotest_common.sh@877 -- # return 0 00:13:35.115 04:56:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.115 04:56:04 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:35.115 04:56:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:13:35.373 /dev/nbd7 00:13:35.373 04:56:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:13:35.373 04:56:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:13:35.373 04:56:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:13:35.373 04:56:05 -- common/autotest_common.sh@857 -- # local i 00:13:35.373 04:56:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:35.373 04:56:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:35.373 04:56:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:13:35.373 04:56:05 -- common/autotest_common.sh@861 -- # break 00:13:35.373 04:56:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:35.373 04:56:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:35.373 04:56:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.373 1+0 records in 00:13:35.373 1+0 records out 00:13:35.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000789304 s, 5.2 MB/s 00:13:35.373 04:56:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.373 04:56:05 -- common/autotest_common.sh@874 -- # size=4096 00:13:35.373 04:56:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.373 04:56:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:35.373 04:56:05 -- common/autotest_common.sh@877 -- # return 0 00:13:35.373 04:56:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.373 04:56:05 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:35.373 04:56:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:13:35.631 /dev/nbd8 00:13:35.631 04:56:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:13:35.631 04:56:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:13:35.631 04:56:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:13:35.631 04:56:05 -- common/autotest_common.sh@857 -- # local i 00:13:35.631 04:56:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:35.631 04:56:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:35.631 04:56:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:13:35.631 04:56:05 -- common/autotest_common.sh@861 -- # break 00:13:35.631 04:56:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:35.631 04:56:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:35.631 04:56:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.631 1+0 records in 00:13:35.631 1+0 records out 00:13:35.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000867312 s, 4.7 MB/s 00:13:35.631 04:56:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.631 04:56:05 -- common/autotest_common.sh@874 -- # size=4096 00:13:35.631 04:56:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.631 04:56:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 
00:13:35.631 04:56:05 -- common/autotest_common.sh@877 -- # return 0 00:13:35.631 04:56:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.631 04:56:05 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:35.631 04:56:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:13:35.889 /dev/nbd9 00:13:35.889 04:56:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:13:35.889 04:56:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:13:35.889 04:56:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:13:35.889 04:56:05 -- common/autotest_common.sh@857 -- # local i 00:13:35.889 04:56:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:35.889 04:56:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:35.889 04:56:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:13:35.889 04:56:05 -- common/autotest_common.sh@861 -- # break 00:13:35.889 04:56:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:35.889 04:56:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:35.889 04:56:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.889 1+0 records in 00:13:35.889 1+0 records out 00:13:35.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110337 s, 3.7 MB/s 00:13:35.889 04:56:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.889 04:56:05 -- common/autotest_common.sh@874 -- # size=4096 00:13:35.889 04:56:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.889 04:56:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:35.889 04:56:05 -- common/autotest_common.sh@877 -- # return 0 00:13:35.889 04:56:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.889 04:56:05 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:35.889 04:56:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:35.889 04:56:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:35.889 04:56:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:36.147 04:56:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd0", 00:13:36.147 "bdev_name": "Malloc0" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd1", 00:13:36.147 "bdev_name": "Malloc1p0" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd10", 00:13:36.147 "bdev_name": "Malloc1p1" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd11", 00:13:36.147 "bdev_name": "Malloc2p0" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd12", 00:13:36.147 "bdev_name": "Malloc2p1" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd13", 00:13:36.147 "bdev_name": "Malloc2p2" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd14", 00:13:36.147 "bdev_name": "Malloc2p3" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd15", 00:13:36.147 "bdev_name": "Malloc2p4" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd2", 00:13:36.147 "bdev_name": "Malloc2p5" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd3", 00:13:36.147 "bdev_name": "Malloc2p6" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd4", 00:13:36.147 "bdev_name": "Malloc2p7" 
00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd5", 00:13:36.147 "bdev_name": "TestPT" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd6", 00:13:36.147 "bdev_name": "raid0" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd7", 00:13:36.147 "bdev_name": "concat0" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd8", 00:13:36.147 "bdev_name": "raid1" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd9", 00:13:36.147 "bdev_name": "AIO0" 00:13:36.147 } 00:13:36.147 ]' 00:13:36.147 04:56:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd0", 00:13:36.147 "bdev_name": "Malloc0" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd1", 00:13:36.147 "bdev_name": "Malloc1p0" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd10", 00:13:36.147 "bdev_name": "Malloc1p1" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd11", 00:13:36.147 "bdev_name": "Malloc2p0" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd12", 00:13:36.147 "bdev_name": "Malloc2p1" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd13", 00:13:36.147 "bdev_name": "Malloc2p2" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd14", 00:13:36.147 "bdev_name": "Malloc2p3" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd15", 00:13:36.147 "bdev_name": "Malloc2p4" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd2", 00:13:36.147 "bdev_name": "Malloc2p5" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd3", 00:13:36.147 "bdev_name": "Malloc2p6" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd4", 00:13:36.147 "bdev_name": "Malloc2p7" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd5", 00:13:36.147 "bdev_name": "TestPT" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd6", 00:13:36.147 "bdev_name": "raid0" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd7", 00:13:36.147 "bdev_name": "concat0" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd8", 00:13:36.147 "bdev_name": "raid1" 00:13:36.147 }, 00:13:36.147 { 00:13:36.147 "nbd_device": "/dev/nbd9", 00:13:36.147 "bdev_name": "AIO0" 00:13:36.147 } 00:13:36.147 ]' 00:13:36.147 04:56:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:36.147 04:56:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:36.147 /dev/nbd1 00:13:36.147 /dev/nbd10 00:13:36.147 /dev/nbd11 00:13:36.147 /dev/nbd12 00:13:36.147 /dev/nbd13 00:13:36.147 /dev/nbd14 00:13:36.147 /dev/nbd15 00:13:36.147 /dev/nbd2 00:13:36.147 /dev/nbd3 00:13:36.147 /dev/nbd4 00:13:36.147 /dev/nbd5 00:13:36.147 /dev/nbd6 00:13:36.147 /dev/nbd7 00:13:36.147 /dev/nbd8 00:13:36.147 /dev/nbd9' 00:13:36.147 04:56:06 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:36.147 /dev/nbd1 00:13:36.147 /dev/nbd10 00:13:36.147 /dev/nbd11 00:13:36.147 /dev/nbd12 00:13:36.147 /dev/nbd13 00:13:36.147 /dev/nbd14 00:13:36.147 /dev/nbd15 00:13:36.147 /dev/nbd2 00:13:36.147 /dev/nbd3 00:13:36.147 /dev/nbd4 00:13:36.147 /dev/nbd5 00:13:36.147 /dev/nbd6 00:13:36.147 /dev/nbd7 00:13:36.147 /dev/nbd8 00:13:36.147 /dev/nbd9' 00:13:36.147 04:56:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:36.405 04:56:06 -- bdev/nbd_common.sh@65 -- # count=16 00:13:36.405 04:56:06 -- bdev/nbd_common.sh@66 -- # echo 16 00:13:36.405 04:56:06 -- bdev/nbd_common.sh@95 -- # count=16 00:13:36.405 
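The count checks around this listing are built from the same pipeline each time: nbd_get_disks returns the JSON just printed, jq extracts the nbd_device fields, and grep -c counts them. Earlier, with everything stopped, the pipeline reported count=0; here it reports 16 and the comparison that follows must agree. Roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    count=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -ne 16 ] && exit 1   # the trace takes the matching branch and continues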
04:56:06 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:13:36.405 04:56:06 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:13:36.405 04:56:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:36.405 04:56:06 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:36.405 04:56:06 -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:36.405 04:56:06 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:36.405 04:56:06 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:36.405 04:56:06 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:36.405 256+0 records in 00:13:36.405 256+0 records out 00:13:36.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00991399 s, 106 MB/s 00:13:36.405 04:56:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.405 04:56:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:36.405 256+0 records in 00:13:36.405 256+0 records out 00:13:36.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142016 s, 7.4 MB/s 00:13:36.405 04:56:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.405 04:56:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:36.663 256+0 records in 00:13:36.663 256+0 records out 00:13:36.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154196 s, 6.8 MB/s 00:13:36.663 04:56:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.663 04:56:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:36.663 256+0 records in 00:13:36.663 256+0 records out 00:13:36.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145628 s, 7.2 MB/s 00:13:36.663 04:56:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.663 04:56:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:36.921 256+0 records in 00:13:36.921 256+0 records out 00:13:36.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153108 s, 6.8 MB/s 00:13:36.921 04:56:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.921 04:56:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:37.180 256+0 records in 00:13:37.180 256+0 records out 00:13:37.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151157 s, 6.9 MB/s 00:13:37.180 04:56:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.180 04:56:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:37.180 256+0 records in 00:13:37.180 256+0 records out 00:13:37.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146751 s, 7.1 MB/s 00:13:37.180 04:56:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.180 04:56:06 -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:13:37.438 256+0 records in 00:13:37.438 256+0 records out 00:13:37.438 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150831 s, 7.0 MB/s 00:13:37.438 04:56:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.438 04:56:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:13:37.438 256+0 records in 00:13:37.438 256+0 records out 00:13:37.438 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15173 s, 6.9 MB/s 00:13:37.438 04:56:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.438 04:56:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:13:37.696 256+0 records in 00:13:37.696 256+0 records out 00:13:37.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15069 s, 7.0 MB/s 00:13:37.697 04:56:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.697 04:56:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:13:37.956 256+0 records in 00:13:37.956 256+0 records out 00:13:37.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14999 s, 7.0 MB/s 00:13:37.956 04:56:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.956 04:56:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:13:37.956 256+0 records in 00:13:37.956 256+0 records out 00:13:37.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150933 s, 6.9 MB/s 00:13:37.956 04:56:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.956 04:56:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:13:38.214 256+0 records in 00:13:38.214 256+0 records out 00:13:38.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15002 s, 7.0 MB/s 00:13:38.214 04:56:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:38.214 04:56:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:13:38.214 256+0 records in 00:13:38.214 256+0 records out 00:13:38.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152573 s, 6.9 MB/s 00:13:38.214 04:56:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:38.214 04:56:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:13:38.472 256+0 records in 00:13:38.472 256+0 records out 00:13:38.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152272 s, 6.9 MB/s 00:13:38.473 04:56:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:38.473 04:56:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:13:38.731 256+0 records in 00:13:38.732 256+0 records out 00:13:38.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158956 s, 6.6 MB/s 00:13:38.732 04:56:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:38.732 04:56:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:13:38.732 256+0 records in 00:13:38.732 256+0 records out 00:13:38.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.207387 s, 5.1 MB/s 00:13:38.732 04:56:08 -- 
bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:13:38.732 04:56:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:38.732 04:56:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:38.732 04:56:08 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:38.732 04:56:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:38.732 04:56:08 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:38.732 04:56:08 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:38.732 04:56:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.732 04:56:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:38.732 04:56:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.732 04:56:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:38.990 04:56:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.990 04:56:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:38.990 04:56:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.990 04:56:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:38.990 04:56:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.990 04:56:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:38.990 04:56:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.990 04:56:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:38.990 04:56:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.990 04:56:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 
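
The write/verify pattern traced above reduces to a small dd/cmp loop. The following is a minimal sketch reconstructed from the trace (not the actual nbd_common.sh source); the device list is abbreviated and the scratch-file path is illustrative:

  # Reconstruction of the nbd write/verify pattern seen in the trace above.
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11)   # ... remaining devices omitted
  tmp_file=/tmp/nbdrandtest

  # write phase: fill a 1 MiB scratch file with random data, then copy it
  # to every nbd device using O_DIRECT writes
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  # verify phase: the first 1 MiB of every device must match the scratch file
  for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
  done
  rm "$tmp_file"
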
00:13:38.991 04:56:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@51 -- # local i 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.991 04:56:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:39.249 04:56:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:39.249 04:56:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:39.249 04:56:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:39.249 04:56:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.249 04:56:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.249 04:56:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:39.249 04:56:09 -- bdev/nbd_common.sh@41 -- # break 00:13:39.249 04:56:09 -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.249 04:56:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.249 04:56:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:39.508 04:56:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:39.508 04:56:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:39.508 04:56:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:39.508 04:56:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.508 04:56:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.508 04:56:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:39.508 04:56:09 -- bdev/nbd_common.sh@41 -- # break 00:13:39.508 04:56:09 -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.508 04:56:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.508 04:56:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:39.794 04:56:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:39.794 04:56:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:39.794 04:56:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:39.794 04:56:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.794 04:56:09 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.794 04:56:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:39.794 04:56:09 -- bdev/nbd_common.sh@41 -- # break 00:13:39.794 04:56:09 -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.794 04:56:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.794 04:56:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:40.053 04:56:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:40.053 04:56:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:40.053 04:56:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:40.053 04:56:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.053 04:56:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.053 04:56:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:40.053 04:56:09 -- bdev/nbd_common.sh@41 -- # break 00:13:40.053 04:56:09 -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.053 04:56:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.053 04:56:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:40.312 04:56:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:40.312 04:56:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:40.312 04:56:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:40.312 04:56:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.312 04:56:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.312 04:56:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:40.312 04:56:10 -- bdev/nbd_common.sh@41 -- # break 00:13:40.312 04:56:10 -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.312 04:56:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.312 04:56:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:40.571 04:56:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:40.571 04:56:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:40.571 04:56:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:40.571 04:56:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.571 04:56:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.571 04:56:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:40.571 04:56:10 -- bdev/nbd_common.sh@41 -- # break 00:13:40.571 04:56:10 -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.571 04:56:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.571 04:56:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:41.139 04:56:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:41.139 04:56:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:41.139 04:56:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:41.139 04:56:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.139 04:56:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.139 04:56:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:41.139 04:56:10 -- bdev/nbd_common.sh@41 -- # break 00:13:41.139 04:56:10 -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.139 04:56:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.139 04:56:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:13:41.139 04:56:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:13:41.139 04:56:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:13:41.139 04:56:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:13:41.139 04:56:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.139 04:56:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.139 04:56:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:13:41.139 04:56:11 -- bdev/nbd_common.sh@41 -- # break 00:13:41.139 04:56:11 -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.139 04:56:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.139 04:56:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:41.398 04:56:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:41.398 04:56:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:41.398 04:56:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:41.398 04:56:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.398 04:56:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.398 04:56:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:41.398 04:56:11 -- bdev/nbd_common.sh@41 -- # break 00:13:41.398 04:56:11 -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.398 04:56:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.398 04:56:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:41.655 04:56:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:41.655 04:56:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:41.655 04:56:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:41.655 04:56:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.655 04:56:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.655 04:56:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:41.655 04:56:11 -- bdev/nbd_common.sh@41 -- # break 00:13:41.655 04:56:11 -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.655 04:56:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.655 04:56:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:41.913 04:56:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:41.913 04:56:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:41.913 04:56:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:41.913 04:56:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.913 04:56:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.913 04:56:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:41.913 04:56:11 -- bdev/nbd_common.sh@41 -- # break 00:13:41.913 04:56:11 -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.913 04:56:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.913 04:56:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:42.172 04:56:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:42.172 04:56:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:42.172 04:56:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:42.172 04:56:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.172 04:56:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.172 04:56:12 -- bdev/nbd_common.sh@38 -- # grep -q 
-w nbd5 /proc/partitions 00:13:42.172 04:56:12 -- bdev/nbd_common.sh@41 -- # break 00:13:42.172 04:56:12 -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.172 04:56:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.172 04:56:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:42.430 04:56:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:42.430 04:56:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:42.430 04:56:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:13:42.430 04:56:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.430 04:56:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.430 04:56:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:42.430 04:56:12 -- bdev/nbd_common.sh@41 -- # break 00:13:42.430 04:56:12 -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.430 04:56:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.430 04:56:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:13:42.688 04:56:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:13:42.688 04:56:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:13:42.688 04:56:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:13:42.688 04:56:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.688 04:56:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.688 04:56:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:13:42.688 04:56:12 -- bdev/nbd_common.sh@41 -- # break 00:13:42.688 04:56:12 -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.688 04:56:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.688 04:56:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:13:42.946 04:56:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:13:42.946 04:56:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:13:42.946 04:56:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:13:42.946 04:56:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.946 04:56:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.946 04:56:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:13:42.946 04:56:12 -- bdev/nbd_common.sh@41 -- # break 00:13:42.946 04:56:12 -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.946 04:56:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.946 04:56:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:13:43.204 04:56:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:13:43.204 04:56:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:13:43.204 04:56:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:13:43.204 04:56:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.204 04:56:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.204 04:56:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:13:43.204 04:56:13 -- bdev/nbd_common.sh@41 -- # break 00:13:43.204 04:56:13 -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.204 04:56:13 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:43.204 04:56:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:43.204 04:56:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:13:43.461 04:56:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:43.461 04:56:13 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:43.461 04:56:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:43.727 04:56:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:43.727 04:56:13 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:43.727 04:56:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:43.728 04:56:13 -- bdev/nbd_common.sh@65 -- # true 00:13:43.728 04:56:13 -- bdev/nbd_common.sh@65 -- # count=0 00:13:43.728 04:56:13 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:43.728 04:56:13 -- bdev/nbd_common.sh@104 -- # count=0 00:13:43.728 04:56:13 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:43.728 04:56:13 -- bdev/nbd_common.sh@109 -- # return 0 00:13:43.728 04:56:13 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:43.728 04:56:13 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:43.728 04:56:13 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:43.728 04:56:13 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:43.728 04:56:13 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:43.728 04:56:13 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:44.001 malloc_lvol_verify 00:13:44.001 04:56:13 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:44.001 1f66134e-84c9-4a0c-9bc6-b671bf77547c 00:13:44.259 04:56:13 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:44.259 7391a7dc-ed97-4aba-a455-de2f82a87b94 00:13:44.259 04:56:14 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:44.517 /dev/nbd0 00:13:44.517 04:56:14 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:13:44.517 mke2fs 1.46.5 (30-Dec-2021) 00:13:44.517 00:13:44.517 Filesystem too small for a journal 00:13:44.517 Discarding device blocks: 0/1024 done 00:13:44.517 Creating filesystem with 1024 4k blocks and 1024 inodes 00:13:44.517 00:13:44.517 Allocating group tables: 0/1 done 00:13:44.517 Writing inode tables: 0/1 done 00:13:44.517 Writing superblocks and filesystem accounting information: 0/1 done 00:13:44.517 00:13:44.517 04:56:14 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:44.517 04:56:14 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:44.517 04:56:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:44.517 04:56:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:44.517 04:56:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:44.517 04:56:14 -- bdev/nbd_common.sh@51 -- # local i 00:13:44.517 04:56:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.517 04:56:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:44.775 04:56:14 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:44.775 04:56:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:44.775 04:56:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:44.775 04:56:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.775 04:56:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.775 04:56:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:44.775 04:56:14 -- bdev/nbd_common.sh@41 -- # break 00:13:44.775 04:56:14 -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.775 04:56:14 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:44.776 04:56:14 -- bdev/nbd_common.sh@147 -- # return 0 00:13:44.776 04:56:14 -- bdev/blockdev.sh@324 -- # killprocess 121200 00:13:44.776 04:56:14 -- common/autotest_common.sh@926 -- # '[' -z 121200 ']' 00:13:44.776 04:56:14 -- common/autotest_common.sh@930 -- # kill -0 121200 00:13:44.776 04:56:14 -- common/autotest_common.sh@931 -- # uname 00:13:44.776 04:56:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:44.776 04:56:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121200 00:13:44.776 04:56:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:44.776 04:56:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:44.776 04:56:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121200' 00:13:44.776 killing process with pid 121200 00:13:44.776 04:56:14 -- common/autotest_common.sh@945 -- # kill 121200 00:13:44.776 04:56:14 -- common/autotest_common.sh@950 -- # wait 121200 00:13:45.342 ************************************ 00:13:45.342 END TEST bdev_nbd 00:13:45.342 ************************************ 00:13:45.342 04:56:15 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:13:45.342 00:13:45.342 real 0m26.227s 00:13:45.342 user 0m36.788s 00:13:45.342 sys 0m9.767s 00:13:45.342 04:56:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.342 04:56:15 -- common/autotest_common.sh@10 -- # set +x 00:13:45.342 04:56:15 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:13:45.342 04:56:15 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:13:45.342 04:56:15 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:13:45.342 04:56:15 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:13:45.342 04:56:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:45.342 04:56:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:45.342 04:56:15 -- common/autotest_common.sh@10 -- # set +x 00:13:45.601 ************************************ 00:13:45.601 START TEST bdev_fio 00:13:45.601 ************************************ 00:13:45.601 04:56:15 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:13:45.601 04:56:15 -- bdev/blockdev.sh@329 -- # local env_context 00:13:45.601 04:56:15 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:45.601 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:45.601 04:56:15 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:45.601 04:56:15 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:13:45.601 04:56:15 -- bdev/blockdev.sh@337 -- # echo '' 00:13:45.601 04:56:15 -- bdev/blockdev.sh@337 -- # env_context= 00:13:45.601 04:56:15 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:45.601 04:56:15 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 
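
For reference, the lvol round-trip traced just above (nbd_with_lvol_verify) boils down to the RPC sequence below. This is a sketch assembled from the trace, not the nbd_common.sh source; the rpc shorthand and the size comments are assumptions:

  # rpc points at scripts/rpc.py with the nbd socket argument already applied
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # small malloc bdev, 512 B blocks
  $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of the malloc bdev
  $rpc bdev_lvol_create lvol 4 -l lvs                    # small lvol named "lvol" inside the store
  $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
  mkfs.ext4 /dev/nbd0                                    # filesystem creation must succeed
  $rpc nbd_stop_disk /dev/nbd0                           # tear the nbd device down again
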
00:13:45.601 04:56:15 -- common/autotest_common.sh@1260 -- # local workload=verify 00:13:45.601 04:56:15 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:13:45.601 04:56:15 -- common/autotest_common.sh@1262 -- # local env_context= 00:13:45.601 04:56:15 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:13:45.601 04:56:15 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:45.601 04:56:15 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:13:45.601 04:56:15 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:13:45.601 04:56:15 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:45.601 04:56:15 -- common/autotest_common.sh@1280 -- # cat 00:13:45.601 04:56:15 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:13:45.601 04:56:15 -- common/autotest_common.sh@1293 -- # cat 00:13:45.601 04:56:15 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:13:45.601 04:56:15 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:13:45.601 04:56:15 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:45.601 04:56:15 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:13:45.601 04:56:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:45.601 04:56:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:13:45.602 04:56:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:13:45.602 04:56:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:45.602 04:56:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:13:45.602 04:56:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:13:45.602 04:56:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:45.602 04:56:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:13:45.602 04:56:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:13:45.602 04:56:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:45.602 04:56:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:13:45.602 04:56:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:13:45.602 04:56:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:45.602 04:56:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:13:45.602 04:56:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:13:45.602 04:56:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:45.602 04:56:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:13:45.602 04:56:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:13:45.602 04:56:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:45.602 04:56:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:13:45.602 04:56:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:13:45.602 04:56:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:45.602 04:56:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:13:45.602 04:56:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:13:45.602 04:56:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:45.602 04:56:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:13:45.602 04:56:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:13:45.602 04:56:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:45.602 04:56:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:13:45.602 04:56:15 -- bdev/blockdev.sh@341 -- # echo 
filename=Malloc2p6 00:13:45.602 04:56:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:45.602 04:56:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:13:45.602 04:56:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:13:45.602 04:56:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:45.602 04:56:15 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:13:45.602 04:56:15 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:13:45.602 04:56:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:45.602 04:56:15 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:13:45.602 04:56:15 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:13:45.602 04:56:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:45.602 04:56:15 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:13:45.602 04:56:15 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:13:45.602 04:56:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:45.602 04:56:15 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:13:45.602 04:56:15 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:13:45.602 04:56:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:45.602 04:56:15 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:13:45.602 04:56:15 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:13:45.602 04:56:15 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:45.602 04:56:15 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:45.602 04:56:15 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:13:45.602 04:56:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:45.602 04:56:15 -- common/autotest_common.sh@10 -- # set +x 00:13:45.602 ************************************ 00:13:45.602 START TEST bdev_fio_rw_verify 00:13:45.602 ************************************ 00:13:45.602 04:56:15 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:45.602 04:56:15 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:45.602 04:56:15 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:45.602 04:56:15 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:45.602 04:56:15 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:45.602 04:56:15 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:45.602 04:56:15 -- common/autotest_common.sh@1320 -- # shift 00:13:45.602 04:56:15 -- common/autotest_common.sh@1322 -- # local asan_lib= 
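
The job-file assembly traced above follows a simple per-bdev pattern. The sketch below is a reconstruction under two assumptions: bdevs_name already holds the bdev names reported by the RPC, and fio_bdev wraps fio with the spdk_bdev ioengine plugin as the trace's fio_plugin invocation suggests; the trace itself only shows the echo side of the config generation:

  cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
  bdevs_name=(Malloc0 Malloc1p0 Malloc1p1 TestPT raid0 concat0 raid1 AIO0)  # abbreviated list

  for b in "${bdevs_name[@]}"; do
    # one fio job section per bdev, targeting that bdev by name
    echo "[job_${b}]" >> "$cfg"
    echo "filename=${b}" >> "$cfg"
  done

  # run fio through the SPDK bdev ioengine against the generated config
  fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 "$cfg" \
    --verify_state_save=0 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
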
00:13:45.602 04:56:15 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:45.602 04:56:15 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:45.602 04:56:15 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:45.602 04:56:15 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:45.602 04:56:15 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:13:45.602 04:56:15 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:13:45.602 04:56:15 -- common/autotest_common.sh@1326 -- # break 00:13:45.602 04:56:15 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:45.602 04:56:15 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:45.860 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:45.860 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:45.860 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:45.860 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:45.860 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:45.860 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:45.861 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:45.861 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:45.861 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:45.861 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:45.861 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:45.861 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:45.861 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:45.861 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:45.861 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:45.861 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:45.861 fio-3.35 00:13:45.861 Starting 16 threads 00:13:58.055 00:13:58.055 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=122365: Sat Apr 27 04:56:26 2024 00:13:58.055 read: IOPS=83.5k, BW=326MiB/s (342MB/s)(3263MiB/10001msec) 00:13:58.055 slat (usec): min=2, max=40041, avg=30.79, stdev=393.27 00:13:58.055 
clat (usec): min=9, max=40250, avg=257.90, stdev=1215.98 00:13:58.055 lat (usec): min=25, max=40271, avg=288.69, stdev=1277.53 00:13:58.055 clat percentiles (usec): 00:13:58.055 | 50.000th=[ 153], 99.000th=[ 635], 99.900th=[16319], 99.990th=[31327], 00:13:58.055 | 99.999th=[39584] 00:13:58.055 write: IOPS=132k, BW=517MiB/s (543MB/s)(5124MiB/9903msec); 0 zone resets 00:13:58.055 slat (usec): min=5, max=57494, avg=62.99, stdev=662.02 00:13:58.055 clat (usec): min=8, max=49575, avg=348.62, stdev=1485.64 00:13:58.055 lat (usec): min=34, max=58172, avg=411.61, stdev=1626.85 00:13:58.055 clat percentiles (usec): 00:13:58.055 | 50.000th=[ 198], 99.000th=[ 4146], 99.900th=[20055], 99.990th=[33817], 00:13:58.055 | 99.999th=[47973] 00:13:58.055 bw ( KiB/s): min=331650, max=795992, per=98.99%, avg=524497.84, stdev=8724.76, samples=305 00:13:58.055 iops : min=82912, max=198998, avg=131124.05, stdev=2181.20, samples=305 00:13:58.055 lat (usec) : 10=0.01%, 20=0.01%, 50=0.97%, 100=15.85%, 250=60.69% 00:13:58.055 lat (usec) : 500=19.71%, 750=1.42%, 1000=0.28% 00:13:58.055 lat (msec) : 2=0.11%, 4=0.09%, 10=0.25%, 20=0.56%, 50=0.08% 00:13:58.055 cpu : usr=55.34%, sys=2.04%, ctx=235612, majf=2, minf=101081 00:13:58.055 IO depths : 1=11.3%, 2=23.5%, 4=52.0%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:58.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.055 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.055 issued rwts: total=835326,1311825,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.055 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:58.055 00:13:58.055 Run status group 0 (all jobs): 00:13:58.055 READ: bw=326MiB/s (342MB/s), 326MiB/s-326MiB/s (342MB/s-342MB/s), io=3263MiB (3421MB), run=10001-10001msec 00:13:58.055 WRITE: bw=517MiB/s (543MB/s), 517MiB/s-517MiB/s (543MB/s-543MB/s), io=5124MiB (5373MB), run=9903-9903msec 00:13:58.055 ----------------------------------------------------- 00:13:58.055 Suppressions used: 00:13:58.055 count bytes template 00:13:58.055 16 140 /usr/src/fio/parse.c 00:13:58.055 11537 1107552 /usr/src/fio/iolog.c 00:13:58.055 1 904 libcrypto.so 00:13:58.055 ----------------------------------------------------- 00:13:58.055 00:13:58.055 ************************************ 00:13:58.055 END TEST bdev_fio_rw_verify 00:13:58.055 ************************************ 00:13:58.055 00:13:58.055 real 0m12.339s 00:13:58.055 user 1m31.611s 00:13:58.055 sys 0m4.292s 00:13:58.055 04:56:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:58.055 04:56:27 -- common/autotest_common.sh@10 -- # set +x 00:13:58.055 04:56:27 -- bdev/blockdev.sh@348 -- # rm -f 00:13:58.055 04:56:27 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:58.055 04:56:27 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:58.055 04:56:27 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:58.055 04:56:27 -- common/autotest_common.sh@1260 -- # local workload=trim 00:13:58.055 04:56:27 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:13:58.055 04:56:27 -- common/autotest_common.sh@1262 -- # local env_context= 00:13:58.055 04:56:27 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:13:58.055 04:56:27 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:58.055 04:56:27 -- common/autotest_common.sh@1270 -- # '[' 
-z trim ']' 00:13:58.055 04:56:27 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:13:58.055 04:56:27 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:58.055 04:56:27 -- common/autotest_common.sh@1280 -- # cat 00:13:58.055 04:56:27 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:13:58.055 04:56:27 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:13:58.055 04:56:27 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:13:58.055 04:56:27 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:58.056 04:56:27 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "95e49f3e-60d6-44e6-affc-a590bd6835cb"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "95e49f3e-60d6-44e6-affc-a590bd6835cb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "59942da2-35ae-52c3-ad6a-fe77b4482ba8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "59942da2-35ae-52c3-ad6a-fe77b4482ba8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "22d02607-ab96-5523-a0e2-3424145fc97d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "22d02607-ab96-5523-a0e2-3424145fc97d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "ef351d09-fab9-55f6-9ece-3caed4fc3950"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ef351d09-fab9-55f6-9ece-3caed4fc3950",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "57fbc074-46c1-5ab0-95ac-6432853f30f6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "57fbc074-46c1-5ab0-95ac-6432853f30f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "5e0f953d-7e58-5cdf-a17b-3a0ebd6c8f27"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5e0f953d-7e58-5cdf-a17b-3a0ebd6c8f27",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "b0d0ce52-e92a-5070-89c1-d5faf36ef3c4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b0d0ce52-e92a-5070-89c1-d5faf36ef3c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "c0ae75c1-7251-5cd9-b52f-eebf2870f494"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c0ae75c1-7251-5cd9-b52f-eebf2870f494",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "696b5c80-4876-5603-af17-09088df3ab99"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "696b5c80-4876-5603-af17-09088df3ab99",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "4992917e-b718-5ab3-a772-f90037bb6ebe"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4992917e-b718-5ab3-a772-f90037bb6ebe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "bf3a9708-5e7e-5255-9022-1366db6c987a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bf3a9708-5e7e-5255-9022-1366db6c987a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "f9bcbc75-25f6-5d3d-8dd4-a8c0a7061d19"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f9bcbc75-25f6-5d3d-8dd4-a8c0a7061d19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "6872e709-d0ee-4841-b0b7-cd843570b946"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6872e709-d0ee-4841-b0b7-cd843570b946",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' 
"memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "6872e709-d0ee-4841-b0b7-cd843570b946",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "53684529-dd64-4f7c-a81f-f09420dc3fc5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "870e42cb-2b67-466f-834d-09329177486b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "7cbc1094-3db2-4000-aa31-16e00824d4f3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7cbc1094-3db2-4000-aa31-16e00824d4f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "7cbc1094-3db2-4000-aa31-16e00824d4f3",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7e4b16a0-5d03-4f03-98a5-160f0a01f266",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "5d51727b-b91f-4b0b-a3e7-896c6c588b5d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "437a53d9-7e44-462a-83bd-f454a64cf843"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "437a53d9-7e44-462a-83bd-f454a64cf843",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "437a53d9-7e44-462a-83bd-f454a64cf843",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "db0ee6b6-88d2-4fd5-9247-6d3e3875a041",' ' "is_configured": true,' ' 
"data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "6085a43f-9a4c-4692-bc0f-65c10c8f34a1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "98fb0ece-b8c5-4225-88cd-39cb40366a0e"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "98fb0ece-b8c5-4225-88cd-39cb40366a0e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:13:58.056 04:56:27 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:13:58.056 Malloc1p0 00:13:58.056 Malloc1p1 00:13:58.056 Malloc2p0 00:13:58.056 Malloc2p1 00:13:58.056 Malloc2p2 00:13:58.056 Malloc2p3 00:13:58.056 Malloc2p4 00:13:58.056 Malloc2p5 00:13:58.056 Malloc2p6 00:13:58.057 Malloc2p7 00:13:58.057 TestPT 00:13:58.057 raid0 00:13:58.057 concat0 ]] 00:13:58.057 04:56:27 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:58.058 04:56:27 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "95e49f3e-60d6-44e6-affc-a590bd6835cb"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "95e49f3e-60d6-44e6-affc-a590bd6835cb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "59942da2-35ae-52c3-ad6a-fe77b4482ba8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "59942da2-35ae-52c3-ad6a-fe77b4482ba8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "22d02607-ab96-5523-a0e2-3424145fc97d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "22d02607-ab96-5523-a0e2-3424145fc97d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "ef351d09-fab9-55f6-9ece-3caed4fc3950"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ef351d09-fab9-55f6-9ece-3caed4fc3950",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "57fbc074-46c1-5ab0-95ac-6432853f30f6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "57fbc074-46c1-5ab0-95ac-6432853f30f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "5e0f953d-7e58-5cdf-a17b-3a0ebd6c8f27"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5e0f953d-7e58-5cdf-a17b-3a0ebd6c8f27",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "b0d0ce52-e92a-5070-89c1-d5faf36ef3c4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b0d0ce52-e92a-5070-89c1-d5faf36ef3c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "c0ae75c1-7251-5cd9-b52f-eebf2870f494"' ' ],' ' "product_name": "Split Disk",' ' 
"block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c0ae75c1-7251-5cd9-b52f-eebf2870f494",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "696b5c80-4876-5603-af17-09088df3ab99"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "696b5c80-4876-5603-af17-09088df3ab99",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "4992917e-b718-5ab3-a772-f90037bb6ebe"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4992917e-b718-5ab3-a772-f90037bb6ebe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "bf3a9708-5e7e-5255-9022-1366db6c987a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bf3a9708-5e7e-5255-9022-1366db6c987a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "f9bcbc75-25f6-5d3d-8dd4-a8c0a7061d19"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f9bcbc75-25f6-5d3d-8dd4-a8c0a7061d19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": 
false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "6872e709-d0ee-4841-b0b7-cd843570b946"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6872e709-d0ee-4841-b0b7-cd843570b946",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "6872e709-d0ee-4841-b0b7-cd843570b946",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "53684529-dd64-4f7c-a81f-f09420dc3fc5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "870e42cb-2b67-466f-834d-09329177486b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "7cbc1094-3db2-4000-aa31-16e00824d4f3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7cbc1094-3db2-4000-aa31-16e00824d4f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "7cbc1094-3db2-4000-aa31-16e00824d4f3",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7e4b16a0-5d03-4f03-98a5-160f0a01f266",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "5d51727b-b91f-4b0b-a3e7-896c6c588b5d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "437a53d9-7e44-462a-83bd-f454a64cf843"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "437a53d9-7e44-462a-83bd-f454a64cf843",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' 
},' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "437a53d9-7e44-462a-83bd-f454a64cf843",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "db0ee6b6-88d2-4fd5-9247-6d3e3875a041",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "6085a43f-9a4c-4692-bc0f-65c10c8f34a1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "98fb0ece-b8c5-4225-88cd-39cb40366a0e"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "98fb0ece-b8c5-4225-88cd-39cb40366a0e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:13:58.058 04:56:27 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:58.058 04:56:27 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:13:58.058 04:56:27 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:13:58.058 04:56:27 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:58.058 04:56:27 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:13:58.058 04:56:27 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:13:58.058 04:56:27 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:58.058 04:56:27 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:13:58.058 04:56:27 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:13:58.058 04:56:27 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:58.058 04:56:27 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:13:58.058 04:56:27 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:13:58.058 04:56:27 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:58.058 04:56:27 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:13:58.058 04:56:27 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:13:58.058 04:56:27 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 
'select(.supported_io_types.unmap == true) | .name') 00:13:58.058 04:56:27 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:13:58.058 04:56:27 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:13:58.058 04:56:27 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:58.058 04:56:27 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:13:58.058 04:56:27 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:13:58.058 04:56:27 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:58.058 04:56:27 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:13:58.058 04:56:27 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:13:58.058 04:56:27 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:58.058 04:56:27 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:13:58.058 04:56:27 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:13:58.058 04:56:27 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:58.058 04:56:27 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:13:58.058 04:56:27 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:13:58.058 04:56:27 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:58.058 04:56:27 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:13:58.058 04:56:27 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:13:58.058 04:56:27 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:58.058 04:56:27 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:13:58.058 04:56:27 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:13:58.058 04:56:27 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:58.058 04:56:27 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:13:58.058 04:56:27 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:13:58.058 04:56:27 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:58.058 04:56:27 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:13:58.058 04:56:27 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:13:58.058 04:56:27 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:58.058 04:56:27 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:13:58.058 04:56:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:58.058 04:56:27 -- common/autotest_common.sh@10 -- # set +x 00:13:58.058 ************************************ 00:13:58.058 START TEST bdev_fio_trim 00:13:58.058 ************************************ 00:13:58.058 04:56:27 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 
--verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:58.058 04:56:27 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:58.058 04:56:27 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:58.058 04:56:27 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:58.058 04:56:27 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:58.058 04:56:27 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:58.058 04:56:27 -- common/autotest_common.sh@1320 -- # shift 00:13:58.058 04:56:27 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:58.058 04:56:27 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:58.058 04:56:27 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:58.058 04:56:27 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:58.058 04:56:27 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:58.058 04:56:27 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:13:58.058 04:56:27 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:13:58.058 04:56:27 -- common/autotest_common.sh@1326 -- # break 00:13:58.058 04:56:27 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:58.058 04:56:27 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:58.316 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:58.316 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:58.317 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:58.317 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:58.317 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:58.317 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:58.317 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:58.317 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:58.317 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:58.317 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:58.317 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:58.317 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:58.317 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:58.317 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:58.317 fio-3.35 00:13:58.317 Starting 14 threads 00:14:10.514 00:14:10.514 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=122578: Sat Apr 27 04:56:39 2024 00:14:10.514 write: IOPS=172k, BW=671MiB/s (703MB/s)(6711MiB/10003msec); 0 zone resets 00:14:10.514 slat (usec): min=2, max=37650, avg=28.45, stdev=344.36 00:14:10.514 clat (usec): min=27, max=32212, avg=213.58, stdev=1022.97 00:14:10.514 lat (usec): min=39, max=37878, avg=242.02, stdev=1078.68 00:14:10.514 clat percentiles (usec): 00:14:10.514 | 50.000th=[ 137], 99.000th=[ 498], 99.900th=[16188], 99.990th=[18220], 00:14:10.514 | 99.999th=[28181] 00:14:10.514 bw ( KiB/s): min=486773, max=963128, per=100.00%, avg=687348.32, stdev=11841.60, samples=266 00:14:10.514 iops : min=121693, max=240782, avg=171837.00, stdev=2960.40, samples=266 00:14:10.514 trim: IOPS=172k, BW=671MiB/s (703MB/s)(6711MiB/10003msec); 0 zone resets 00:14:10.514 slat (usec): min=5, max=28078, avg=20.41, stdev=289.72 00:14:10.514 clat (usec): min=4, max=37878, avg=215.85, stdev=947.60 00:14:10.514 lat (usec): min=15, max=37892, avg=236.26, stdev=990.86 00:14:10.514 clat percentiles (usec): 00:14:10.514 | 50.000th=[ 153], 99.000th=[ 310], 99.900th=[16188], 99.990th=[17957], 00:14:10.514 | 99.999th=[27919] 00:14:10.514 bw ( KiB/s): min=486781, max=963192, per=100.00%, avg=687351.68, stdev=11841.90, samples=266 00:14:10.514 iops : min=121695, max=240798, avg=171837.84, stdev=2960.48, samples=266 00:14:10.514 lat (usec) : 10=0.10%, 20=0.24%, 50=0.92%, 100=18.12%, 250=76.83% 00:14:10.514 lat (usec) : 500=3.07%, 750=0.22%, 1000=0.01% 00:14:10.514 lat (msec) : 2=0.01%, 4=0.01%, 10=0.05%, 20=0.40%, 50=0.01% 00:14:10.514 cpu : usr=69.11%, sys=0.28%, ctx=170693, majf=0, minf=9043 00:14:10.514 IO depths : 1=12.3%, 2=24.7%, 4=50.0%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:10.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.514 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.514 issued rwts: total=0,1718003,1718007,0 short=0,0,0,0 dropped=0,0,0,0 00:14:10.514 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:10.514 00:14:10.514 Run status group 0 (all jobs): 00:14:10.514 WRITE: bw=671MiB/s (703MB/s), 671MiB/s-671MiB/s (703MB/s-703MB/s), io=6711MiB (7037MB), run=10003-10003msec 00:14:10.514 TRIM: bw=671MiB/s (703MB/s), 671MiB/s-671MiB/s (703MB/s-703MB/s), io=6711MiB (7037MB), run=10003-10003msec 00:14:10.514 ----------------------------------------------------- 00:14:10.514 Suppressions used: 00:14:10.514 count bytes template 00:14:10.514 14 129 /usr/src/fio/parse.c 00:14:10.514 1 904 libcrypto.so 00:14:10.514 ----------------------------------------------------- 00:14:10.514 00:14:10.514 00:14:10.514 real 0m11.978s 00:14:10.514 user 1m39.709s 00:14:10.514 sys 0m1.176s 00:14:10.514 04:56:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:10.514 ************************************ 00:14:10.514 04:56:39 -- common/autotest_common.sh@10 -- # set +x 00:14:10.514 END TEST bdev_fio_trim 00:14:10.514 ************************************ 00:14:10.514 04:56:39 -- 
bdev/blockdev.sh@366 -- # rm -f 00:14:10.514 04:56:39 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:10.514 /home/vagrant/spdk_repo/spdk 00:14:10.514 04:56:39 -- bdev/blockdev.sh@368 -- # popd 00:14:10.514 04:56:39 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:14:10.514 00:14:10.514 real 0m24.692s 00:14:10.514 user 3m11.563s 00:14:10.514 sys 0m5.545s 00:14:10.514 04:56:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:10.514 04:56:39 -- common/autotest_common.sh@10 -- # set +x 00:14:10.514 ************************************ 00:14:10.514 END TEST bdev_fio 00:14:10.515 ************************************ 00:14:10.515 04:56:39 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:10.515 04:56:39 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:10.515 04:56:39 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:14:10.515 04:56:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:10.515 04:56:39 -- common/autotest_common.sh@10 -- # set +x 00:14:10.515 ************************************ 00:14:10.515 START TEST bdev_verify 00:14:10.515 ************************************ 00:14:10.515 04:56:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:10.515 [2024-04-27 04:56:40.055146] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:10.515 [2024-04-27 04:56:40.055949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122753 ] 00:14:10.515 [2024-04-27 04:56:40.219900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:10.515 [2024-04-27 04:56:40.341517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.515 [2024-04-27 04:56:40.341526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.773 [2024-04-27 04:56:40.535964] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:10.773 [2024-04-27 04:56:40.536424] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:10.773 [2024-04-27 04:56:40.543877] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:10.773 [2024-04-27 04:56:40.544086] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:10.773 [2024-04-27 04:56:40.551940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:10.773 [2024-04-27 04:56:40.552140] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:10.773 [2024-04-27 04:56:40.552344] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:11.031 [2024-04-27 04:56:40.668237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:11.031 [2024-04-27 04:56:40.668804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.031 [2024-04-27 04:56:40.669041] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000009980 00:14:11.031 [2024-04-27 04:56:40.669203] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.031 [2024-04-27 04:56:40.672442] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.031 [2024-04-27 04:56:40.672649] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:11.289 Running I/O for 5 seconds... 00:14:16.553 00:14:16.553 Latency(us) 00:14:16.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.553 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:16.553 Verification LBA range: start 0x0 length 0x1000 00:14:16.553 Malloc0 : 5.18 1500.20 5.86 0.00 0.00 84570.84 2308.65 274536.26 00:14:16.553 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.553 Verification LBA range: start 0x1000 length 0x1000 00:14:16.553 Malloc0 : 5.18 1473.56 5.76 0.00 0.00 86092.13 1966.08 341263.83 00:14:16.553 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:16.553 Verification LBA range: start 0x0 length 0x800 00:14:16.553 Malloc1p0 : 5.18 1046.76 4.09 0.00 0.00 121104.46 4438.57 181117.67 00:14:16.553 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.553 Verification LBA range: start 0x800 length 0x800 00:14:16.553 Malloc1p0 : 5.18 1046.75 4.09 0.00 0.00 121132.70 4468.36 180164.42 00:14:16.553 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:16.553 Verification LBA range: start 0x0 length 0x800 00:14:16.553 Malloc1p1 : 5.18 1046.14 4.09 0.00 0.00 120959.70 3932.16 178257.92 00:14:16.553 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.553 Verification LBA range: start 0x800 length 0x800 00:14:16.553 Malloc1p1 : 5.18 1046.13 4.09 0.00 0.00 120982.31 3991.74 176351.42 00:14:16.553 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:16.553 Verification LBA range: start 0x0 length 0x200 00:14:16.553 Malloc2p0 : 5.19 1045.54 4.08 0.00 0.00 120803.57 3991.74 174444.92 00:14:16.553 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.553 Verification LBA range: start 0x200 length 0x200 00:14:16.553 Malloc2p0 : 5.19 1045.53 4.08 0.00 0.00 120822.22 3991.74 172538.41 00:14:16.553 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:16.553 Verification LBA range: start 0x0 length 0x200 00:14:16.553 Malloc2p1 : 5.19 1044.94 4.08 0.00 0.00 120678.18 3961.95 170631.91 00:14:16.553 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.553 Verification LBA range: start 0x200 length 0x200 00:14:16.553 Malloc2p1 : 5.19 1044.93 4.08 0.00 0.00 120692.96 4021.53 168725.41 00:14:16.553 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:16.553 Verification LBA range: start 0x0 length 0x200 00:14:16.553 Malloc2p2 : 5.19 1044.35 4.08 0.00 0.00 120499.49 4825.83 166818.91 00:14:16.553 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.553 Verification LBA range: start 0x200 length 0x200 00:14:16.553 Malloc2p2 : 5.19 1044.34 4.08 0.00 0.00 120540.81 4736.47 163959.16 00:14:16.553 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:16.553 Verification LBA range: start 0x0 length 0x200 00:14:16.553 Malloc2p3 : 5.20 1043.74 4.08 0.00 0.00 120304.15 4557.73 161099.40 00:14:16.553 Job: 
Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.553 Verification LBA range: start 0x200 length 0x200 00:14:16.553 Malloc2p3 : 5.20 1043.73 4.08 0.00 0.00 120349.62 4498.15 159192.90 00:14:16.553 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:16.553 Verification LBA range: start 0x0 length 0x200 00:14:16.553 Malloc2p4 : 5.20 1043.13 4.07 0.00 0.00 120147.13 4468.36 156333.15 00:14:16.553 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.554 Verification LBA range: start 0x200 length 0x200 00:14:16.554 Malloc2p4 : 5.20 1043.13 4.07 0.00 0.00 120181.28 4498.15 154426.65 00:14:16.554 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:16.554 Verification LBA range: start 0x0 length 0x200 00:14:16.554 Malloc2p5 : 5.20 1042.50 4.07 0.00 0.00 119974.96 4527.94 152520.15 00:14:16.554 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.554 Verification LBA range: start 0x200 length 0x200 00:14:16.554 Malloc2p5 : 5.20 1042.49 4.07 0.00 0.00 120017.77 4557.73 150613.64 00:14:16.554 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:16.554 Verification LBA range: start 0x0 length 0x200 00:14:16.554 Malloc2p6 : 5.21 1041.92 4.07 0.00 0.00 119794.13 4170.47 148707.14 00:14:16.554 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.554 Verification LBA range: start 0x200 length 0x200 00:14:16.554 Malloc2p6 : 5.21 1041.91 4.07 0.00 0.00 119855.78 4200.26 146800.64 00:14:16.554 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:16.554 Verification LBA range: start 0x0 length 0x200 00:14:16.554 Malloc2p7 : 5.22 1055.41 4.12 0.00 0.00 118895.98 4051.32 144894.14 00:14:16.554 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.554 Verification LBA range: start 0x200 length 0x200 00:14:16.554 Malloc2p7 : 5.21 1041.33 4.07 0.00 0.00 119697.69 4140.68 142987.64 00:14:16.554 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:16.554 Verification LBA range: start 0x0 length 0x1000 00:14:16.554 TestPT : 5.22 1040.52 4.06 0.00 0.00 120337.64 9115.46 145847.39 00:14:16.554 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.554 Verification LBA range: start 0x1000 length 0x1000 00:14:16.554 TestPT : 5.22 1026.90 4.01 0.00 0.00 121973.94 7745.16 212574.95 00:14:16.554 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:16.554 Verification LBA range: start 0x0 length 0x2000 00:14:16.554 raid0 : 5.23 1054.34 4.12 0.00 0.00 118513.27 4498.15 135361.63 00:14:16.554 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.554 Verification LBA range: start 0x2000 length 0x2000 00:14:16.554 raid0 : 5.22 1054.90 4.12 0.00 0.00 118483.54 4438.57 125829.12 00:14:16.554 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:16.554 Verification LBA range: start 0x0 length 0x2000 00:14:16.554 concat0 : 5.23 1053.79 4.12 0.00 0.00 118359.81 4230.05 130595.37 00:14:16.554 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.554 Verification LBA range: start 0x2000 length 0x2000 00:14:16.554 concat0 : 5.23 1054.36 4.12 0.00 0.00 118306.35 4230.05 121539.49 00:14:16.554 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:16.554 Verification LBA range: start 0x0 
length 0x1000 00:14:16.554 raid1 : 5.23 1053.24 4.11 0.00 0.00 118196.19 4676.89 125829.12 00:14:16.554 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.554 Verification LBA range: start 0x1000 length 0x1000 00:14:16.554 raid1 : 5.23 1053.81 4.12 0.00 0.00 118165.75 4676.89 117726.49 00:14:16.554 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:16.554 Verification LBA range: start 0x0 length 0x4e2 00:14:16.554 AIO0 : 5.23 1052.81 4.11 0.00 0.00 117925.44 10545.34 113913.48 00:14:16.554 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.554 Verification LBA range: start 0x4e2 length 0x4e2 00:14:16.554 AIO0 : 5.23 1053.29 4.11 0.00 0.00 117853.00 11379.43 102474.47 00:14:16.554 =================================================================================================================== 00:14:16.554 Total : 34366.42 134.24 0.00 0.00 116871.12 1966.08 341263.83 00:14:17.121 00:14:17.121 real 0m6.849s 00:14:17.121 user 0m11.480s 00:14:17.121 sys 0m0.678s 00:14:17.121 04:56:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:17.121 04:56:46 -- common/autotest_common.sh@10 -- # set +x 00:14:17.121 ************************************ 00:14:17.121 END TEST bdev_verify 00:14:17.121 ************************************ 00:14:17.121 04:56:46 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:17.121 04:56:46 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:14:17.121 04:56:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:17.121 04:56:46 -- common/autotest_common.sh@10 -- # set +x 00:14:17.121 ************************************ 00:14:17.121 START TEST bdev_verify_big_io 00:14:17.121 ************************************ 00:14:17.121 04:56:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:17.121 [2024-04-27 04:56:46.953357] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
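The trim pass above assembles its fio job file on the fly: every bdev whose JSON reports "unmap": true becomes a [job_<name>] section with filename=<name>, and fio is then launched through the SPDK bdev ioengine. A minimal standalone sketch of that pattern, assuming the checkout path used in this run and a hypothetical bdevs.json holding bdev_get_bdevs output (the real test streams one JSON object per bdev instead of reading a file):

SPDK=/home/vagrant/spdk_repo/spdk                # checkout path seen in this run
JOBFILE=/tmp/bdev_trim.fio                       # illustrative job file
printf '[global]\nrw=trimwrite\nthread=1\n' > "$JOBFILE"   # the real bdev.fio carries a fuller [global] section
# keep only bdevs that advertise unmap support, one fio job per bdev
jq -r '.[] | select(.supported_io_types.unmap == true) | .name' bdevs.json |
while read -r b; do
    printf '[job_%s]\nfilename=%s\n' "$b" "$b"
done >> "$JOBFILE"
# drive the jobs through the SPDK fio plugin; the libasan preload is only needed on ASAN builds
LD_PRELOAD=$SPDK/build/fio/spdk_bdev fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    --spdk_json_conf=$SPDK/test/bdev/bdev.json "$JOBFILE"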
00:14:17.121 [2024-04-27 04:56:46.953683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122857 ] 00:14:17.380 [2024-04-27 04:56:47.128215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:17.380 [2024-04-27 04:56:47.228851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.380 [2024-04-27 04:56:47.228851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.638 [2024-04-27 04:56:47.414699] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:17.638 [2024-04-27 04:56:47.414875] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:17.638 [2024-04-27 04:56:47.422604] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:17.638 [2024-04-27 04:56:47.422748] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:17.638 [2024-04-27 04:56:47.430702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:17.638 [2024-04-27 04:56:47.430853] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:17.638 [2024-04-27 04:56:47.430908] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:17.897 [2024-04-27 04:56:47.551292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:17.897 [2024-04-27 04:56:47.551500] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.897 [2024-04-27 04:56:47.551570] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:14:17.897 [2024-04-27 04:56:47.551613] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.897 [2024-04-27 04:56:47.554640] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.897 [2024-04-27 04:56:47.554703] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:17.897 [2024-04-27 04:56:47.759895] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:14:17.897 [2024-04-27 04:56:47.761408] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:14:17.897 [2024-04-27 04:56:47.763664] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:14:17.897 [2024-04-27 04:56:47.765920] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:14:17.897 [2024-04-27 04:56:47.767353] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:14:17.897 [2024-04-27 04:56:47.769637] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:14:17.897 [2024-04-27 04:56:47.771217] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:14:17.897 [2024-04-27 04:56:47.773492] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:14:17.897 [2024-04-27 04:56:47.774962] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:14:17.897 [2024-04-27 04:56:47.777295] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:14:17.897 [2024-04-27 04:56:47.778718] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:14:17.897 [2024-04-27 04:56:47.780958] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:14:17.897 [2024-04-27 04:56:47.782399] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:14:17.897 [2024-04-27 04:56:47.784607] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:14:17.897 [2024-04-27 04:56:47.786985] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:14:17.897 [2024-04-27 04:56:47.788395] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:14:18.155 [2024-04-27 04:56:47.826360] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:14:18.155 [2024-04-27 04:56:47.829641] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:14:18.155 Running I/O for 5 seconds... 00:14:24.716 00:14:24.716 Latency(us) 00:14:24.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.716 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:24.716 Verification LBA range: start 0x0 length 0x100 00:14:24.716 Malloc0 : 5.77 325.05 20.32 0.00 0.00 386952.73 24784.52 1044763.00 00:14:24.716 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:24.716 Verification LBA range: start 0x100 length 0x100 00:14:24.716 Malloc0 : 5.76 306.40 19.15 0.00 0.00 410597.94 22282.24 1227787.17 00:14:24.716 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:24.716 Verification LBA range: start 0x0 length 0x80 00:14:24.716 Malloc1p0 : 5.89 183.18 11.45 0.00 0.00 669010.04 50522.30 1273543.21 00:14:24.716 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:24.716 Verification LBA range: start 0x80 length 0x80 00:14:24.716 Malloc1p0 : 5.76 234.22 14.64 0.00 0.00 530357.30 50045.67 1105771.05 00:14:24.717 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x0 length 0x80 00:14:24.717 Malloc1p1 : 6.01 111.20 6.95 0.00 0.00 1078365.45 50760.61 2287802.18 00:14:24.717 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x80 length 0x80 00:14:24.717 Malloc1p1 : 5.97 111.82 6.99 0.00 0.00 1071830.76 47662.55 2364062.25 00:14:24.717 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x0 length 0x20 00:14:24.717 Malloc2p0 : 5.78 60.43 3.78 0.00 0.00 495076.42 7983.48 831234.79 00:14:24.717 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x20 length 0x20 00:14:24.717 Malloc2p0 : 5.77 60.52 3.78 0.00 0.00 494954.28 7536.64 713031.68 00:14:24.717 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x0 length 0x20 00:14:24.717 Malloc2p1 : 5.78 60.41 3.78 0.00 0.00 493082.35 8698.41 815982.78 00:14:24.717 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x20 length 0x20 00:14:24.717 Malloc2p1 : 5.77 60.50 3.78 0.00 0.00 492831.66 8281.37 693966.66 00:14:24.717 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x0 length 0x20 00:14:24.717 Malloc2p2 : 5.78 60.39 3.77 0.00 0.00 490773.05 8817.57 800730.76 00:14:24.717 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x20 length 0x20 00:14:24.717 Malloc2p2 : 5.77 60.47 3.78 0.00 0.00 490861.40 8698.41 682527.65 00:14:24.717 Job: Malloc2p3 (Core Mask 0x1, 
workload: verify, depth: 32, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x0 length 0x20 00:14:24.717 Malloc2p3 : 5.78 60.36 3.77 0.00 0.00 488493.80 8519.68 785478.75 00:14:24.717 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x20 length 0x20 00:14:24.717 Malloc2p3 : 5.77 60.46 3.78 0.00 0.00 488532.30 9115.46 663462.63 00:14:24.717 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x0 length 0x20 00:14:24.717 Malloc2p4 : 5.78 60.35 3.77 0.00 0.00 486194.50 9889.98 766413.73 00:14:24.717 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x20 length 0x20 00:14:24.717 Malloc2p4 : 5.77 60.44 3.78 0.00 0.00 486319.11 8221.79 652023.62 00:14:24.717 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x0 length 0x20 00:14:24.717 Malloc2p5 : 5.78 60.33 3.77 0.00 0.00 483799.76 8102.63 747348.71 00:14:24.717 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x20 length 0x20 00:14:24.717 Malloc2p5 : 5.78 60.43 3.78 0.00 0.00 484040.29 7685.59 636771.61 00:14:24.717 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x0 length 0x20 00:14:24.717 Malloc2p6 : 5.79 60.31 3.77 0.00 0.00 481812.00 8638.84 735909.70 00:14:24.717 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x20 length 0x20 00:14:24.717 Malloc2p6 : 5.78 60.41 3.78 0.00 0.00 481940.72 8281.37 625332.60 00:14:24.717 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x0 length 0x20 00:14:24.717 Malloc2p7 : 5.79 60.30 3.77 0.00 0.00 479518.41 9294.20 716844.68 00:14:24.717 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x20 length 0x20 00:14:24.717 Malloc2p7 : 5.78 60.38 3.77 0.00 0.00 479903.62 8102.63 610080.58 00:14:24.717 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x0 length 0x100 00:14:24.717 TestPT : 6.05 110.43 6.90 0.00 0.00 1017136.02 51237.24 2242046.14 00:14:24.717 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x100 length 0x100 00:14:24.717 TestPT : 5.93 102.03 6.38 0.00 0.00 1114496.85 61484.68 2242046.14 00:14:24.717 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x0 length 0x200 00:14:24.717 raid0 : 6.01 115.61 7.23 0.00 0.00 963119.80 49092.42 2226794.12 00:14:24.717 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x200 length 0x200 00:14:24.717 raid0 : 5.98 116.22 7.26 0.00 0.00 969871.42 49330.73 2333558.23 00:14:24.717 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x0 length 0x200 00:14:24.717 concat0 : 6.01 120.41 7.53 0.00 0.00 908813.98 37415.10 2211542.11 00:14:24.717 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x200 length 0x200 00:14:24.717 concat0 : 5.98 116.20 7.26 
0.00 0.00 949454.54 46947.61 2333558.23 00:14:24.717 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x0 length 0x100 00:14:24.717 raid1 : 6.00 132.56 8.29 0.00 0.00 815517.98 31695.59 2211542.11 00:14:24.717 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x100 length 0x100 00:14:24.717 raid1 : 5.99 126.65 7.92 0.00 0.00 858966.94 20256.58 2318306.21 00:14:24.717 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x0 length 0x4e 00:14:24.717 AIO0 : 6.01 140.59 8.79 0.00 0.00 462109.66 3455.53 1281169.22 00:14:24.717 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:14:24.717 Verification LBA range: start 0x4e length 0x4e 00:14:24.717 AIO0 : 5.98 138.54 8.66 0.00 0.00 475490.74 4140.68 1334551.27 00:14:24.717 =================================================================================================================== 00:14:24.717 Total : 3457.59 216.10 0.00 0.00 647611.04 3455.53 2364062.25 00:14:24.717 00:14:24.717 real 0m7.701s 00:14:24.717 user 0m13.962s 00:14:24.717 sys 0m0.576s 00:14:24.717 04:56:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:24.717 ************************************ 00:14:24.717 04:56:54 -- common/autotest_common.sh@10 -- # set +x 00:14:24.717 END TEST bdev_verify_big_io 00:14:24.717 ************************************ 00:14:24.976 04:56:54 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:24.976 04:56:54 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:14:24.976 04:56:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:24.976 04:56:54 -- common/autotest_common.sh@10 -- # set +x 00:14:24.976 ************************************ 00:14:24.976 START TEST bdev_write_zeroes 00:14:24.976 ************************************ 00:14:24.977 04:56:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:24.977 [2024-04-27 04:56:54.706955] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
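The queue-depth warnings at the start of this big-I/O pass line up with the bdev geometry from the JSON dump earlier: with 64 KiB I/Os a verify job can only keep as many requests outstanding as fit in the bdev, and the reported caps match that capacity split across the two verify jobs created for core mask 0x3. A quick consistency check (the division by two jobs is an inference from the numbers, not something the log states explicitly):

echo $(( 8192 * 512 / 65536 / 2 ))    # Malloc2p0..p7: 4 MiB / 64 KiB I/O / 2 jobs -> 32
echo $(( 5000 * 2048 / 65536 / 2 ))   # AIO0: ~9.8 MiB / 64 KiB I/O / 2 jobs -> 78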
00:14:24.977 [2024-04-27 04:56:54.707313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122977 ] 00:14:25.235 [2024-04-27 04:56:54.889521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.235 [2024-04-27 04:56:54.998181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.494 [2024-04-27 04:56:55.179301] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:25.494 [2024-04-27 04:56:55.179433] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:25.494 [2024-04-27 04:56:55.187205] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:25.494 [2024-04-27 04:56:55.187334] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:25.494 [2024-04-27 04:56:55.195286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:25.494 [2024-04-27 04:56:55.195390] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:25.494 [2024-04-27 04:56:55.195454] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:25.494 [2024-04-27 04:56:55.302950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:25.494 [2024-04-27 04:56:55.303125] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.494 [2024-04-27 04:56:55.303200] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:14:25.494 [2024-04-27 04:56:55.303234] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.494 [2024-04-27 04:56:55.306023] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.494 [2024-04-27 04:56:55.306103] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:25.752 Running I/O for 1 seconds... 
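As in the earlier verify runs, the write_zeroes workload only starts once the deferred passthru vbdev is registered on top of Malloc3 (the "created pt_bdev for: TestPT" line just above). Outside the harness the same two-layer stack can be rebuilt with the rpc helper; the sizes mirror the TestPT JSON (65536 blocks x 512 B = 32 MiB), but the exact flag spelling below is recalled from the SPDK rpc script rather than taken from this log:

./scripts/rpc.py bdev_malloc_create -b Malloc3 32 512        # 32 MiB malloc bdev, 512 B blocks
./scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT   # passthru vbdev claiming Malloc3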
00:14:27.137 00:14:27.137 Latency(us) 00:14:27.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.137 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:27.137 Malloc0 : 1.05 5265.09 20.57 0.00 0.00 24292.53 856.44 45279.42 00:14:27.137 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:27.137 Malloc1p0 : 1.05 5258.41 20.54 0.00 0.00 24274.88 1228.80 44087.85 00:14:27.137 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:27.137 Malloc1p1 : 1.05 5252.45 20.52 0.00 0.00 24239.30 1005.38 42896.29 00:14:27.137 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:27.137 Malloc2p0 : 1.05 5246.40 20.49 0.00 0.00 24219.02 997.93 42181.35 00:14:27.137 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:27.137 Malloc2p1 : 1.05 5240.38 20.47 0.00 0.00 24197.77 1012.83 41466.41 00:14:27.137 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:27.137 Malloc2p2 : 1.05 5234.46 20.45 0.00 0.00 24168.17 990.49 40751.48 00:14:27.137 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:27.137 Malloc2p3 : 1.05 5228.55 20.42 0.00 0.00 24144.09 1072.41 39798.23 00:14:27.137 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:27.137 Malloc2p4 : 1.05 5222.77 20.40 0.00 0.00 24122.00 990.49 39083.29 00:14:27.137 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:27.137 Malloc2p5 : 1.06 5216.70 20.38 0.00 0.00 24097.90 1035.17 38130.04 00:14:27.137 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:27.138 Malloc2p6 : 1.06 5210.66 20.35 0.00 0.00 24069.92 983.04 37176.79 00:14:27.138 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:27.138 Malloc2p7 : 1.06 5204.91 20.33 0.00 0.00 24046.92 1087.30 35985.22 00:14:27.138 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:27.138 TestPT : 1.06 5198.86 20.31 0.00 0.00 24014.95 1184.12 34793.66 00:14:27.138 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:27.138 raid0 : 1.06 5192.24 20.28 0.00 0.00 23968.88 1854.37 32887.16 00:14:27.138 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:27.138 concat0 : 1.06 5185.70 20.26 0.00 0.00 23911.82 1660.74 31218.97 00:14:27.138 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:27.138 raid1 : 1.06 5177.33 20.22 0.00 0.00 23839.53 2800.17 29193.31 00:14:27.138 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:27.138 AIO0 : 1.06 5170.40 20.20 0.00 0.00 23740.98 1623.51 29074.15 00:14:27.138 =================================================================================================================== 00:14:27.138 Total : 83505.28 326.19 0.00 0.00 24084.30 856.44 45279.42 00:14:27.409 00:14:27.409 real 0m2.556s 00:14:27.409 user 0m1.893s 00:14:27.409 sys 0m0.478s 00:14:27.409 04:56:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:27.409 04:56:57 -- common/autotest_common.sh@10 -- # set +x 00:14:27.409 ************************************ 00:14:27.409 END TEST bdev_write_zeroes 00:14:27.409 ************************************ 00:14:27.409 04:56:57 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:27.409 04:56:57 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:14:27.409 04:56:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:27.409 04:56:57 -- common/autotest_common.sh@10 -- # set +x 00:14:27.409 ************************************ 00:14:27.409 START TEST bdev_json_nonenclosed 00:14:27.409 ************************************ 00:14:27.409 04:56:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:27.667 [2024-04-27 04:56:57.312735] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:27.667 [2024-04-27 04:56:57.313312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123034 ] 00:14:27.667 [2024-04-27 04:56:57.483251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.925 [2024-04-27 04:56:57.591334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.925 [2024-04-27 04:56:57.591615] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:27.925 [2024-04-27 04:56:57.591677] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:27.925 00:14:27.925 real 0m0.527s 00:14:27.925 user 0m0.289s 00:14:27.925 sys 0m0.129s 00:14:27.925 04:56:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:27.925 04:56:57 -- common/autotest_common.sh@10 -- # set +x 00:14:27.925 ************************************ 00:14:27.925 END TEST bdev_json_nonenclosed 00:14:27.925 ************************************ 00:14:27.925 04:56:57 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:27.925 04:56:57 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:14:27.925 04:56:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:27.925 04:56:57 -- common/autotest_common.sh@10 -- # set +x 00:14:28.184 ************************************ 00:14:28.184 START TEST bdev_json_nonarray 00:14:28.184 ************************************ 00:14:28.184 04:56:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:28.184 [2024-04-27 04:56:57.883553] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
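The nonenclosed case just finished is a negative test: bdevperf is pointed at a config whose top level is not a JSON object, and the expected outcome is exactly the "not enclosed in {}" error plus the non-zero spdk_app_stop seen above. The repository's nonenclosed.json is not reproduced in this log, but an illustrative stand-in that should trip the same check is a config whose top level is an array:

cat > nonenclosed-example.json <<'EOF'
[ { "subsystem": "bdev", "config": [] } ]
EOF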
00:14:28.184 [2024-04-27 04:56:57.883824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123058 ] 00:14:28.184 [2024-04-27 04:56:58.054701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.443 [2024-04-27 04:56:58.162193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.443 [2024-04-27 04:56:58.162490] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:14:28.443 [2024-04-27 04:56:58.162559] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:28.443 00:14:28.443 real 0m0.496s 00:14:28.443 user 0m0.262s 00:14:28.443 sys 0m0.127s 00:14:28.443 04:56:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:28.443 ************************************ 00:14:28.443 04:56:58 -- common/autotest_common.sh@10 -- # set +x 00:14:28.443 END TEST bdev_json_nonarray 00:14:28.443 ************************************ 00:14:28.702 04:56:58 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:14:28.702 04:56:58 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:14:28.702 04:56:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:28.702 04:56:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:28.702 04:56:58 -- common/autotest_common.sh@10 -- # set +x 00:14:28.702 ************************************ 00:14:28.702 START TEST bdev_qos 00:14:28.702 ************************************ 00:14:28.702 04:56:58 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:14:28.702 04:56:58 -- bdev/blockdev.sh@444 -- # QOS_PID=123096 00:14:28.702 Process qos testing pid: 123096 00:14:28.702 04:56:58 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 123096' 00:14:28.702 04:56:58 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:14:28.702 04:56:58 -- bdev/blockdev.sh@447 -- # waitforlisten 123096 00:14:28.702 04:56:58 -- common/autotest_common.sh@819 -- # '[' -z 123096 ']' 00:14:28.702 04:56:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.702 04:56:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:28.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.702 04:56:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.702 04:56:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:28.702 04:56:58 -- common/autotest_common.sh@10 -- # set +x 00:14:28.702 04:56:58 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:14:28.702 [2024-04-27 04:56:58.422481] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
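The QoS suite starting here keeps a single bdevperf instance (launched with -z, so it waits for RPC) running a 60-second randread job while rate limits are applied and re-measured on the fly. The fixture assembled in the trace just below amounts to roughly the following (rpc.py invocations shown for readability; the script drives the same RPC methods through its rpc_cmd wrapper):

    # 128 MiB malloc bdev, 512-byte blocks; gets the IOPS and read-only bandwidth limits
    scripts/rpc.py bdev_malloc_create -b Malloc_0 128 512
    # null bdev of the same geometry; gets the rw_mbytes_per_sec limit
    scripts/rpc.py bdev_null_create Null_1 128 512
    # kick the registered job off out of band
    examples/bdev/bdevperf/bdevperf.py perform_tests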
00:14:28.702 [2024-04-27 04:56:58.422964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123096 ] 00:14:28.961 [2024-04-27 04:56:58.595958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.961 [2024-04-27 04:56:58.709248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.529 04:56:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:29.529 04:56:59 -- common/autotest_common.sh@852 -- # return 0 00:14:29.529 04:56:59 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:14:29.529 04:56:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.529 04:56:59 -- common/autotest_common.sh@10 -- # set +x 00:14:29.788 Malloc_0 00:14:29.788 04:56:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.788 04:56:59 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:14:29.788 04:56:59 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:14:29.788 04:56:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:29.788 04:56:59 -- common/autotest_common.sh@889 -- # local i 00:14:29.788 04:56:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:29.788 04:56:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:29.788 04:56:59 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:29.788 04:56:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.788 04:56:59 -- common/autotest_common.sh@10 -- # set +x 00:14:29.788 04:56:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.788 04:56:59 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:14:29.788 04:56:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.788 04:56:59 -- common/autotest_common.sh@10 -- # set +x 00:14:29.788 [ 00:14:29.788 { 00:14:29.788 "name": "Malloc_0", 00:14:29.788 "aliases": [ 00:14:29.788 "5c922b6d-8aaf-425a-bf26-e621c3426959" 00:14:29.788 ], 00:14:29.788 "product_name": "Malloc disk", 00:14:29.788 "block_size": 512, 00:14:29.788 "num_blocks": 262144, 00:14:29.788 "uuid": "5c922b6d-8aaf-425a-bf26-e621c3426959", 00:14:29.788 "assigned_rate_limits": { 00:14:29.788 "rw_ios_per_sec": 0, 00:14:29.789 "rw_mbytes_per_sec": 0, 00:14:29.789 "r_mbytes_per_sec": 0, 00:14:29.789 "w_mbytes_per_sec": 0 00:14:29.789 }, 00:14:29.789 "claimed": false, 00:14:29.789 "zoned": false, 00:14:29.789 "supported_io_types": { 00:14:29.789 "read": true, 00:14:29.789 "write": true, 00:14:29.789 "unmap": true, 00:14:29.789 "write_zeroes": true, 00:14:29.789 "flush": true, 00:14:29.789 "reset": true, 00:14:29.789 "compare": false, 00:14:29.789 "compare_and_write": false, 00:14:29.789 "abort": true, 00:14:29.789 "nvme_admin": false, 00:14:29.789 "nvme_io": false 00:14:29.789 }, 00:14:29.789 "memory_domains": [ 00:14:29.789 { 00:14:29.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.789 "dma_device_type": 2 00:14:29.789 } 00:14:29.789 ], 00:14:29.789 "driver_specific": {} 00:14:29.789 } 00:14:29.789 ] 00:14:29.789 04:56:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.789 04:56:59 -- common/autotest_common.sh@895 -- # return 0 00:14:29.789 04:56:59 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:14:29.789 04:56:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.789 04:56:59 -- common/autotest_common.sh@10 -- # 
set +x 00:14:29.789 Null_1 00:14:29.789 04:56:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.789 04:56:59 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:14:29.789 04:56:59 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:14:29.789 04:56:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:29.789 04:56:59 -- common/autotest_common.sh@889 -- # local i 00:14:29.789 04:56:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:29.789 04:56:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:29.789 04:56:59 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:29.789 04:56:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.789 04:56:59 -- common/autotest_common.sh@10 -- # set +x 00:14:29.789 04:56:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.789 04:56:59 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:14:29.789 04:56:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.789 04:56:59 -- common/autotest_common.sh@10 -- # set +x 00:14:29.789 [ 00:14:29.789 { 00:14:29.789 "name": "Null_1", 00:14:29.789 "aliases": [ 00:14:29.789 "8f2ed66a-d046-4741-9f4e-3aeb0180c84b" 00:14:29.789 ], 00:14:29.789 "product_name": "Null disk", 00:14:29.789 "block_size": 512, 00:14:29.789 "num_blocks": 262144, 00:14:29.789 "uuid": "8f2ed66a-d046-4741-9f4e-3aeb0180c84b", 00:14:29.789 "assigned_rate_limits": { 00:14:29.789 "rw_ios_per_sec": 0, 00:14:29.789 "rw_mbytes_per_sec": 0, 00:14:29.789 "r_mbytes_per_sec": 0, 00:14:29.789 "w_mbytes_per_sec": 0 00:14:29.789 }, 00:14:29.789 "claimed": false, 00:14:29.789 "zoned": false, 00:14:29.789 "supported_io_types": { 00:14:29.789 "read": true, 00:14:29.789 "write": true, 00:14:29.789 "unmap": false, 00:14:29.789 "write_zeroes": true, 00:14:29.789 "flush": false, 00:14:29.789 "reset": true, 00:14:29.789 "compare": false, 00:14:29.789 "compare_and_write": false, 00:14:29.789 "abort": true, 00:14:29.789 "nvme_admin": false, 00:14:29.789 "nvme_io": false 00:14:29.789 }, 00:14:29.789 "driver_specific": {} 00:14:29.789 } 00:14:29.789 ] 00:14:29.789 04:56:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.789 04:56:59 -- common/autotest_common.sh@895 -- # return 0 00:14:29.789 04:56:59 -- bdev/blockdev.sh@455 -- # qos_function_test 00:14:29.789 04:56:59 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:14:29.789 04:56:59 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:29.789 04:56:59 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:14:29.789 04:56:59 -- bdev/blockdev.sh@410 -- # local io_result=0 00:14:29.789 04:56:59 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:14:29.789 04:56:59 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:14:29.789 04:56:59 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:14:29.789 04:56:59 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:14:29.789 04:56:59 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:14:29.789 04:56:59 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:29.789 04:56:59 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:29.789 04:56:59 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:14:29.789 04:56:59 -- bdev/blockdev.sh@376 -- # tail -1 00:14:29.789 Running I/O for 60 seconds... 
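Rather than assuming a baseline, get_io_result measures the unthrottled device first: iostat.py is polled in one-second intervals while the job runs, the last line for the target bdev is kept, and an awk field picks out the figure of interest (field 2, operations per second, in this IOPS pass). A limit well below that free-running rate is then applied while I/O is still in flight. Condensed, the steps traced here look roughly like:

    # sample the live rate for Malloc_0 (five 1-second samples, keep the last)
    result=$(scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1)
    io_result=$(echo "$result" | awk '{print $2}')     # ~67230 IOPS unthrottled in this run
    # choose a limit far enough under that to matter (16000 here) and apply it live
    scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec 16000 Malloc_0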
00:14:35.054 04:57:04 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 67230.20 268920.80 0.00 0.00 271360.00 0.00 0.00 ' 00:14:35.054 04:57:04 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:14:35.054 04:57:04 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:14:35.054 04:57:04 -- bdev/blockdev.sh@378 -- # iostat_result=67230.20 00:14:35.054 04:57:04 -- bdev/blockdev.sh@383 -- # echo 67230 00:14:35.054 04:57:04 -- bdev/blockdev.sh@414 -- # io_result=67230 00:14:35.054 04:57:04 -- bdev/blockdev.sh@416 -- # iops_limit=16000 00:14:35.054 04:57:04 -- bdev/blockdev.sh@417 -- # '[' 16000 -gt 1000 ']' 00:14:35.054 04:57:04 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 16000 Malloc_0 00:14:35.054 04:57:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.054 04:57:04 -- common/autotest_common.sh@10 -- # set +x 00:14:35.054 04:57:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.054 04:57:04 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 16000 IOPS Malloc_0 00:14:35.054 04:57:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:35.054 04:57:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:35.054 04:57:04 -- common/autotest_common.sh@10 -- # set +x 00:14:35.054 ************************************ 00:14:35.054 START TEST bdev_qos_iops 00:14:35.054 ************************************ 00:14:35.054 04:57:04 -- common/autotest_common.sh@1104 -- # run_qos_test 16000 IOPS Malloc_0 00:14:35.054 04:57:04 -- bdev/blockdev.sh@387 -- # local qos_limit=16000 00:14:35.054 04:57:04 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:14:35.054 04:57:04 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:14:35.054 04:57:04 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:14:35.054 04:57:04 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:14:35.054 04:57:04 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:35.054 04:57:04 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:14:35.054 04:57:04 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:35.054 04:57:04 -- bdev/blockdev.sh@376 -- # tail -1 00:14:40.347 04:57:09 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 16042.36 64169.44 0.00 0.00 65088.00 0.00 0.00 ' 00:14:40.347 04:57:09 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:14:40.347 04:57:09 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:14:40.347 04:57:09 -- bdev/blockdev.sh@378 -- # iostat_result=16042.36 00:14:40.347 04:57:09 -- bdev/blockdev.sh@383 -- # echo 16042 00:14:40.347 04:57:09 -- bdev/blockdev.sh@390 -- # qos_result=16042 00:14:40.347 04:57:09 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:14:40.347 04:57:09 -- bdev/blockdev.sh@394 -- # lower_limit=14400 00:14:40.347 04:57:09 -- bdev/blockdev.sh@395 -- # upper_limit=17600 00:14:40.347 04:57:09 -- bdev/blockdev.sh@398 -- # '[' 16042 -lt 14400 ']' 00:14:40.347 04:57:09 -- bdev/blockdev.sh@398 -- # '[' 16042 -gt 17600 ']' 00:14:40.347 00:14:40.347 real 0m5.216s 00:14:40.347 user 0m0.124s 00:14:40.347 sys 0m0.022s 00:14:40.347 04:57:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.347 04:57:09 -- common/autotest_common.sh@10 -- # set +x 00:14:40.347 ************************************ 00:14:40.347 END TEST bdev_qos_iops 00:14:40.347 ************************************ 00:14:40.347 04:57:10 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:14:40.347 04:57:10 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:40.347 04:57:10 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:14:40.347 04:57:10 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:40.347 04:57:10 -- bdev/blockdev.sh@376 -- # grep Null_1 00:14:40.347 04:57:10 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:40.347 04:57:10 -- bdev/blockdev.sh@376 -- # tail -1 00:14:45.609 04:57:15 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 23515.21 94060.83 0.00 0.00 96256.00 0.00 0.00 ' 00:14:45.609 04:57:15 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:45.609 04:57:15 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:45.609 04:57:15 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:45.609 04:57:15 -- bdev/blockdev.sh@380 -- # iostat_result=96256.00 00:14:45.609 04:57:15 -- bdev/blockdev.sh@383 -- # echo 96256 00:14:45.609 04:57:15 -- bdev/blockdev.sh@425 -- # bw_limit=96256 00:14:45.609 04:57:15 -- bdev/blockdev.sh@426 -- # bw_limit=9 00:14:45.609 04:57:15 -- bdev/blockdev.sh@427 -- # '[' 9 -lt 2 ']' 00:14:45.609 04:57:15 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 9 Null_1 00:14:45.609 04:57:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.609 04:57:15 -- common/autotest_common.sh@10 -- # set +x 00:14:45.609 04:57:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.609 04:57:15 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 9 BANDWIDTH Null_1 00:14:45.609 04:57:15 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:45.609 04:57:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:45.609 04:57:15 -- common/autotest_common.sh@10 -- # set +x 00:14:45.609 ************************************ 00:14:45.609 START TEST bdev_qos_bw 00:14:45.609 ************************************ 00:14:45.609 04:57:15 -- common/autotest_common.sh@1104 -- # run_qos_test 9 BANDWIDTH Null_1 00:14:45.609 04:57:15 -- bdev/blockdev.sh@387 -- # local qos_limit=9 00:14:45.609 04:57:15 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:14:45.609 04:57:15 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:14:45.609 04:57:15 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:45.609 04:57:15 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:14:45.609 04:57:15 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:45.609 04:57:15 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:45.609 04:57:15 -- bdev/blockdev.sh@376 -- # tail -1 00:14:45.609 04:57:15 -- bdev/blockdev.sh@376 -- # grep Null_1 00:14:50.874 04:57:20 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 2304.27 9217.10 0.00 0.00 9448.00 0.00 0.00 ' 00:14:50.874 04:57:20 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:50.874 04:57:20 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:50.874 04:57:20 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:50.874 04:57:20 -- bdev/blockdev.sh@380 -- # iostat_result=9448.00 00:14:50.874 04:57:20 -- bdev/blockdev.sh@383 -- # echo 9448 00:14:50.874 04:57:20 -- bdev/blockdev.sh@390 -- # qos_result=9448 00:14:50.874 04:57:20 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:50.874 04:57:20 -- bdev/blockdev.sh@392 -- # qos_limit=9216 00:14:50.874 04:57:20 -- bdev/blockdev.sh@394 -- # lower_limit=8294 00:14:50.874 04:57:20 -- bdev/blockdev.sh@395 -- # upper_limit=10137 00:14:50.874 04:57:20 -- bdev/blockdev.sh@398 -- # '[' 9448 -lt 8294 ']' 00:14:50.874 04:57:20 -- bdev/blockdev.sh@398 -- # '[' 9448 -gt 10137 ']' 
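run_qos_test counts a limit as honoured when the re-measured rate stays within 10% of it on either side, which is where the bounds printed in this trace come from: 14400/17600 around the 16000 IOPS limit, and 8294/10137 around the 9 MiB/s limit once it is expressed as 9216 KiB. With bash integer arithmetic the check reduces to roughly:

    qos_limit=9216                          # 9 MiB/s in KiB, compared against iostat's figure
    lower_limit=$((qos_limit * 9 / 10))     # 8294
    upper_limit=$((qos_limit * 11 / 10))    # 10137
    if [ "$qos_result" -lt "$lower_limit" ] || [ "$qos_result" -gt "$upper_limit" ]; then
        echo "qos limit $qos_limit not met: measured $qos_result" >&2
        exit 1
    fi

The 9448 measured for Null_1 just above lands inside that window, so bdev_qos_bw passes.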
00:14:50.874 00:14:50.874 real 0m5.257s 00:14:50.874 user 0m0.117s 00:14:50.874 sys 0m0.028s 00:14:50.874 04:57:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:50.874 04:57:20 -- common/autotest_common.sh@10 -- # set +x 00:14:50.874 ************************************ 00:14:50.874 END TEST bdev_qos_bw 00:14:50.874 ************************************ 00:14:50.874 04:57:20 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:14:50.874 04:57:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.874 04:57:20 -- common/autotest_common.sh@10 -- # set +x 00:14:50.874 04:57:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.874 04:57:20 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:14:50.874 04:57:20 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:50.874 04:57:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:50.874 04:57:20 -- common/autotest_common.sh@10 -- # set +x 00:14:50.874 ************************************ 00:14:50.874 START TEST bdev_qos_ro_bw 00:14:50.874 ************************************ 00:14:50.874 04:57:20 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:14:50.874 04:57:20 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:14:50.874 04:57:20 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:14:50.874 04:57:20 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:14:50.874 04:57:20 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:50.874 04:57:20 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:14:50.874 04:57:20 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:50.874 04:57:20 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:50.874 04:57:20 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:14:50.874 04:57:20 -- bdev/blockdev.sh@376 -- # tail -1 00:14:56.140 04:57:25 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.38 2045.53 0.00 0.00 2060.00 0.00 0.00 ' 00:14:56.140 04:57:25 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:56.140 04:57:25 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:56.140 04:57:25 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:56.140 04:57:25 -- bdev/blockdev.sh@380 -- # iostat_result=2060.00 00:14:56.140 04:57:25 -- bdev/blockdev.sh@383 -- # echo 2060 00:14:56.140 04:57:25 -- bdev/blockdev.sh@390 -- # qos_result=2060 00:14:56.140 04:57:25 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:56.140 04:57:25 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:14:56.140 04:57:25 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:14:56.140 04:57:25 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:14:56.140 04:57:25 -- bdev/blockdev.sh@398 -- # '[' 2060 -lt 1843 ']' 00:14:56.140 04:57:25 -- bdev/blockdev.sh@398 -- # '[' 2060 -gt 2252 ']' 00:14:56.140 00:14:56.140 real 0m5.174s 00:14:56.140 user 0m0.126s 00:14:56.140 sys 0m0.025s 00:14:56.140 04:57:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.140 ************************************ 00:14:56.140 END TEST bdev_qos_ro_bw 00:14:56.140 ************************************ 00:14:56.140 04:57:25 -- common/autotest_common.sh@10 -- # set +x 00:14:56.140 04:57:25 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:14:56.140 04:57:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.140 04:57:25 -- common/autotest_common.sh@10 -- # set +x 00:14:56.707 04:57:26 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.707 04:57:26 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:14:56.707 04:57:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.707 04:57:26 -- common/autotest_common.sh@10 -- # set +x 00:14:56.707 00:14:56.707 Latency(us) 00:14:56.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.707 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:56.707 Malloc_0 : 26.76 22578.87 88.20 0.00 0.00 11232.27 2978.91 503316.48 00:14:56.707 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:56.707 Null_1 : 26.91 23873.19 93.25 0.00 0.00 10695.42 889.95 148707.14 00:14:56.707 =================================================================================================================== 00:14:56.707 Total : 46452.06 181.45 0.00 0.00 10955.62 889.95 503316.48 00:14:56.707 0 00:14:56.707 04:57:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.707 04:57:26 -- bdev/blockdev.sh@459 -- # killprocess 123096 00:14:56.707 04:57:26 -- common/autotest_common.sh@926 -- # '[' -z 123096 ']' 00:14:56.707 04:57:26 -- common/autotest_common.sh@930 -- # kill -0 123096 00:14:56.707 04:57:26 -- common/autotest_common.sh@931 -- # uname 00:14:56.707 04:57:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:56.707 04:57:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123096 00:14:56.707 04:57:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:56.708 04:57:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:56.708 killing process with pid 123096 00:14:56.708 04:57:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123096' 00:14:56.708 04:57:26 -- common/autotest_common.sh@945 -- # kill 123096 00:14:56.708 Received shutdown signal, test time was about 26.945244 seconds 00:14:56.708 00:14:56.708 Latency(us) 00:14:56.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.708 =================================================================================================================== 00:14:56.708 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:56.708 04:57:26 -- common/autotest_common.sh@950 -- # wait 123096 00:14:57.276 04:57:26 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:14:57.276 00:14:57.276 real 0m28.595s 00:14:57.276 user 0m29.373s 00:14:57.276 sys 0m0.698s 00:14:57.276 04:57:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:57.276 ************************************ 00:14:57.276 04:57:26 -- common/autotest_common.sh@10 -- # set +x 00:14:57.276 END TEST bdev_qos 00:14:57.276 ************************************ 00:14:57.276 04:57:27 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:14:57.276 04:57:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:57.276 04:57:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:57.276 04:57:27 -- common/autotest_common.sh@10 -- # set +x 00:14:57.276 ************************************ 00:14:57.276 START TEST bdev_qd_sampling 00:14:57.276 ************************************ 00:14:57.276 04:57:27 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:14:57.276 04:57:27 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:14:57.276 04:57:27 -- bdev/blockdev.sh@539 -- # QD_PID=123565 00:14:57.276 04:57:27 -- bdev/blockdev.sh@538 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:14:57.276 04:57:27 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 123565' 00:14:57.276 Process bdev QD sampling period testing pid: 123565 00:14:57.276 04:57:27 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:14:57.276 04:57:27 -- bdev/blockdev.sh@542 -- # waitforlisten 123565 00:14:57.276 04:57:27 -- common/autotest_common.sh@819 -- # '[' -z 123565 ']' 00:14:57.276 04:57:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.276 04:57:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:57.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.276 04:57:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.276 04:57:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:57.276 04:57:27 -- common/autotest_common.sh@10 -- # set +x 00:14:57.276 [2024-04-27 04:57:27.069831] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:57.276 [2024-04-27 04:57:27.070072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123565 ] 00:14:57.534 [2024-04-27 04:57:27.238022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:57.534 [2024-04-27 04:57:27.347425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.534 [2024-04-27 04:57:27.347437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.468 04:57:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:58.468 04:57:28 -- common/autotest_common.sh@852 -- # return 0 00:14:58.468 04:57:28 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:14:58.468 04:57:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.468 04:57:28 -- common/autotest_common.sh@10 -- # set +x 00:14:58.468 Malloc_QD 00:14:58.468 04:57:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.468 04:57:28 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:14:58.468 04:57:28 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:14:58.468 04:57:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:58.468 04:57:28 -- common/autotest_common.sh@889 -- # local i 00:14:58.468 04:57:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:58.468 04:57:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:58.468 04:57:28 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:58.468 04:57:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.468 04:57:28 -- common/autotest_common.sh@10 -- # set +x 00:14:58.468 04:57:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.468 04:57:28 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:14:58.468 04:57:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.468 04:57:28 -- common/autotest_common.sh@10 -- # set +x 00:14:58.468 [ 00:14:58.468 { 00:14:58.468 "name": "Malloc_QD", 00:14:58.468 "aliases": [ 00:14:58.468 "74b58645-929c-48f3-9a6f-09d55afce5e0" 00:14:58.468 ], 00:14:58.468 "product_name": "Malloc disk", 00:14:58.468 "block_size": 
512, 00:14:58.468 "num_blocks": 262144, 00:14:58.468 "uuid": "74b58645-929c-48f3-9a6f-09d55afce5e0", 00:14:58.468 "assigned_rate_limits": { 00:14:58.468 "rw_ios_per_sec": 0, 00:14:58.468 "rw_mbytes_per_sec": 0, 00:14:58.468 "r_mbytes_per_sec": 0, 00:14:58.468 "w_mbytes_per_sec": 0 00:14:58.468 }, 00:14:58.468 "claimed": false, 00:14:58.468 "zoned": false, 00:14:58.468 "supported_io_types": { 00:14:58.468 "read": true, 00:14:58.468 "write": true, 00:14:58.468 "unmap": true, 00:14:58.468 "write_zeroes": true, 00:14:58.468 "flush": true, 00:14:58.468 "reset": true, 00:14:58.468 "compare": false, 00:14:58.468 "compare_and_write": false, 00:14:58.468 "abort": true, 00:14:58.468 "nvme_admin": false, 00:14:58.468 "nvme_io": false 00:14:58.468 }, 00:14:58.468 "memory_domains": [ 00:14:58.468 { 00:14:58.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.468 "dma_device_type": 2 00:14:58.468 } 00:14:58.468 ], 00:14:58.468 "driver_specific": {} 00:14:58.468 } 00:14:58.468 ] 00:14:58.468 04:57:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.468 04:57:28 -- common/autotest_common.sh@895 -- # return 0 00:14:58.468 04:57:28 -- bdev/blockdev.sh@548 -- # sleep 2 00:14:58.468 04:57:28 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:58.468 Running I/O for 5 seconds... 00:15:00.368 04:57:30 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:15:00.368 04:57:30 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:15:00.368 04:57:30 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:15:00.368 04:57:30 -- bdev/blockdev.sh@519 -- # local iostats 00:15:00.368 04:57:30 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:15:00.368 04:57:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.368 04:57:30 -- common/autotest_common.sh@10 -- # set +x 00:15:00.368 04:57:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.368 04:57:30 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:15:00.368 04:57:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.368 04:57:30 -- common/autotest_common.sh@10 -- # set +x 00:15:00.368 04:57:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.368 04:57:30 -- bdev/blockdev.sh@523 -- # iostats='{ 00:15:00.368 "tick_rate": 2200000000, 00:15:00.368 "ticks": 1742380855870, 00:15:00.368 "bdevs": [ 00:15:00.368 { 00:15:00.368 "name": "Malloc_QD", 00:15:00.368 "bytes_read": 869306880, 00:15:00.368 "num_read_ops": 212227, 00:15:00.368 "bytes_written": 0, 00:15:00.368 "num_write_ops": 0, 00:15:00.368 "bytes_unmapped": 0, 00:15:00.368 "num_unmap_ops": 0, 00:15:00.368 "bytes_copied": 0, 00:15:00.368 "num_copy_ops": 0, 00:15:00.368 "read_latency_ticks": 2148816465832, 00:15:00.368 "max_read_latency_ticks": 24003469, 00:15:00.368 "min_read_latency_ticks": 486724, 00:15:00.368 "write_latency_ticks": 0, 00:15:00.368 "max_write_latency_ticks": 0, 00:15:00.368 "min_write_latency_ticks": 0, 00:15:00.368 "unmap_latency_ticks": 0, 00:15:00.368 "max_unmap_latency_ticks": 0, 00:15:00.368 "min_unmap_latency_ticks": 0, 00:15:00.368 "copy_latency_ticks": 0, 00:15:00.368 "max_copy_latency_ticks": 0, 00:15:00.368 "min_copy_latency_ticks": 0, 00:15:00.368 "io_error": {}, 00:15:00.368 "queue_depth_polling_period": 10, 00:15:00.368 "queue_depth": 512, 00:15:00.368 "io_time": 20, 00:15:00.368 "weighted_io_time": 10240 00:15:00.368 } 00:15:00.368 ] 00:15:00.368 }' 00:15:00.368 04:57:30 -- bdev/blockdev.sh@525 -- # jq -r 
'.bdevs[0].queue_depth_polling_period' 00:15:00.368 04:57:30 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:15:00.368 04:57:30 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:15:00.368 04:57:30 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:15:00.369 04:57:30 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:15:00.369 04:57:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.369 04:57:30 -- common/autotest_common.sh@10 -- # set +x 00:15:00.369 00:15:00.369 Latency(us) 00:15:00.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.369 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:15:00.369 Malloc_QD : 1.99 54973.89 214.74 0.00 0.00 4643.85 1213.91 10962.39 00:15:00.369 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:00.369 Malloc_QD : 1.99 55977.86 218.66 0.00 0.00 4561.84 878.78 5153.51 00:15:00.369 =================================================================================================================== 00:15:00.369 Total : 110951.74 433.41 0.00 0.00 4602.47 878.78 10962.39 00:15:00.627 0 00:15:00.627 04:57:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.627 04:57:30 -- bdev/blockdev.sh@552 -- # killprocess 123565 00:15:00.627 04:57:30 -- common/autotest_common.sh@926 -- # '[' -z 123565 ']' 00:15:00.627 04:57:30 -- common/autotest_common.sh@930 -- # kill -0 123565 00:15:00.627 04:57:30 -- common/autotest_common.sh@931 -- # uname 00:15:00.627 04:57:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:00.627 04:57:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123565 00:15:00.627 04:57:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:00.627 04:57:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:00.627 killing process with pid 123565 00:15:00.627 04:57:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123565' 00:15:00.627 Received shutdown signal, test time was about 2.054842 seconds 00:15:00.627 00:15:00.627 Latency(us) 00:15:00.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.627 =================================================================================================================== 00:15:00.627 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:00.627 04:57:30 -- common/autotest_common.sh@945 -- # kill 123565 00:15:00.627 04:57:30 -- common/autotest_common.sh@950 -- # wait 123565 00:15:00.884 04:57:30 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:15:00.884 00:15:00.884 real 0m3.686s 00:15:00.884 user 0m7.124s 00:15:00.884 sys 0m0.395s 00:15:00.884 04:57:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.884 04:57:30 -- common/autotest_common.sh@10 -- # set +x 00:15:00.884 ************************************ 00:15:00.884 END TEST bdev_qd_sampling 00:15:00.884 ************************************ 00:15:00.884 04:57:30 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:15:00.884 04:57:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:00.884 04:57:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:00.884 04:57:30 -- common/autotest_common.sh@10 -- # set +x 00:15:00.884 ************************************ 00:15:00.884 START TEST bdev_error 00:15:00.884 ************************************ 00:15:00.885 04:57:30 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:15:00.885 04:57:30 -- 
bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:15:00.885 04:57:30 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:15:00.885 04:57:30 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:15:00.885 04:57:30 -- bdev/blockdev.sh@470 -- # ERR_PID=123641 00:15:00.885 Process error testing pid: 123641 00:15:00.885 04:57:30 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 123641' 00:15:00.885 04:57:30 -- bdev/blockdev.sh@472 -- # waitforlisten 123641 00:15:00.885 04:57:30 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:15:00.885 04:57:30 -- common/autotest_common.sh@819 -- # '[' -z 123641 ']' 00:15:00.885 04:57:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.885 04:57:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:00.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.885 04:57:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.885 04:57:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:00.885 04:57:30 -- common/autotest_common.sh@10 -- # set +x 00:15:01.143 [2024-04-27 04:57:30.822357] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:01.143 [2024-04-27 04:57:30.822651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123641 ] 00:15:01.143 [2024-04-27 04:57:30.991631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.401 [2024-04-27 04:57:31.093198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.969 04:57:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:01.969 04:57:31 -- common/autotest_common.sh@852 -- # return 0 00:15:01.969 04:57:31 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:15:01.969 04:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.969 04:57:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.969 Dev_1 00:15:01.969 04:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.969 04:57:31 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:15:01.969 04:57:31 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:15:01.969 04:57:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:01.969 04:57:31 -- common/autotest_common.sh@889 -- # local i 00:15:01.969 04:57:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:01.969 04:57:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:01.969 04:57:31 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:15:01.969 04:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.969 04:57:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.969 04:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.969 04:57:31 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:15:01.969 04:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.969 04:57:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.969 [ 00:15:01.969 { 00:15:01.969 "name": "Dev_1", 00:15:01.969 "aliases": [ 00:15:01.969 "35f3e897-2f3b-42cf-a2de-77cd37d36396" 00:15:01.969 ], 00:15:01.969 "product_name": "Malloc disk", 00:15:01.969 "block_size": 
512, 00:15:01.969 "num_blocks": 262144, 00:15:01.969 "uuid": "35f3e897-2f3b-42cf-a2de-77cd37d36396", 00:15:01.969 "assigned_rate_limits": { 00:15:01.969 "rw_ios_per_sec": 0, 00:15:01.969 "rw_mbytes_per_sec": 0, 00:15:01.969 "r_mbytes_per_sec": 0, 00:15:01.969 "w_mbytes_per_sec": 0 00:15:01.969 }, 00:15:01.969 "claimed": false, 00:15:01.969 "zoned": false, 00:15:01.969 "supported_io_types": { 00:15:01.969 "read": true, 00:15:01.969 "write": true, 00:15:01.969 "unmap": true, 00:15:01.969 "write_zeroes": true, 00:15:01.969 "flush": true, 00:15:01.969 "reset": true, 00:15:01.969 "compare": false, 00:15:01.969 "compare_and_write": false, 00:15:01.969 "abort": true, 00:15:01.969 "nvme_admin": false, 00:15:01.969 "nvme_io": false 00:15:01.969 }, 00:15:01.969 "memory_domains": [ 00:15:01.969 { 00:15:01.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.969 "dma_device_type": 2 00:15:01.969 } 00:15:01.969 ], 00:15:01.969 "driver_specific": {} 00:15:01.969 } 00:15:01.969 ] 00:15:01.969 04:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.969 04:57:31 -- common/autotest_common.sh@895 -- # return 0 00:15:01.969 04:57:31 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:15:01.969 04:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.969 04:57:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.969 true 00:15:01.969 04:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.969 04:57:31 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:15:01.969 04:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.969 04:57:31 -- common/autotest_common.sh@10 -- # set +x 00:15:02.281 Dev_2 00:15:02.281 04:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.281 04:57:31 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:15:02.281 04:57:31 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:15:02.281 04:57:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:02.281 04:57:31 -- common/autotest_common.sh@889 -- # local i 00:15:02.281 04:57:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:02.282 04:57:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:02.282 04:57:31 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:15:02.282 04:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.282 04:57:31 -- common/autotest_common.sh@10 -- # set +x 00:15:02.282 04:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.282 04:57:31 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:15:02.282 04:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.282 04:57:31 -- common/autotest_common.sh@10 -- # set +x 00:15:02.282 [ 00:15:02.282 { 00:15:02.282 "name": "Dev_2", 00:15:02.282 "aliases": [ 00:15:02.282 "de01e185-8e3b-467a-a4c2-18594c4c75f7" 00:15:02.282 ], 00:15:02.282 "product_name": "Malloc disk", 00:15:02.282 "block_size": 512, 00:15:02.282 "num_blocks": 262144, 00:15:02.282 "uuid": "de01e185-8e3b-467a-a4c2-18594c4c75f7", 00:15:02.282 "assigned_rate_limits": { 00:15:02.282 "rw_ios_per_sec": 0, 00:15:02.282 "rw_mbytes_per_sec": 0, 00:15:02.282 "r_mbytes_per_sec": 0, 00:15:02.282 "w_mbytes_per_sec": 0 00:15:02.282 }, 00:15:02.282 "claimed": false, 00:15:02.282 "zoned": false, 00:15:02.282 "supported_io_types": { 00:15:02.282 "read": true, 00:15:02.282 "write": true, 00:15:02.282 "unmap": true, 00:15:02.282 "write_zeroes": true, 00:15:02.282 "flush": true, 00:15:02.282 "reset": true, 
00:15:02.282 "compare": false, 00:15:02.282 "compare_and_write": false, 00:15:02.282 "abort": true, 00:15:02.282 "nvme_admin": false, 00:15:02.282 "nvme_io": false 00:15:02.282 }, 00:15:02.282 "memory_domains": [ 00:15:02.282 { 00:15:02.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.282 "dma_device_type": 2 00:15:02.282 } 00:15:02.282 ], 00:15:02.282 "driver_specific": {} 00:15:02.282 } 00:15:02.282 ] 00:15:02.282 04:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.282 04:57:31 -- common/autotest_common.sh@895 -- # return 0 00:15:02.282 04:57:31 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:15:02.282 04:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.282 04:57:31 -- common/autotest_common.sh@10 -- # set +x 00:15:02.282 04:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.282 04:57:31 -- bdev/blockdev.sh@482 -- # sleep 1 00:15:02.282 04:57:31 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:15:02.282 Running I/O for 5 seconds... 00:15:03.214 04:57:32 -- bdev/blockdev.sh@485 -- # kill -0 123641 00:15:03.214 Process is existed as continue on error is set. Pid: 123641 00:15:03.214 04:57:32 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 123641' 00:15:03.214 04:57:32 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:15:03.214 04:57:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.214 04:57:32 -- common/autotest_common.sh@10 -- # set +x 00:15:03.214 04:57:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.214 04:57:32 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:15:03.214 04:57:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.214 04:57:32 -- common/autotest_common.sh@10 -- # set +x 00:15:03.214 04:57:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.214 04:57:32 -- bdev/blockdev.sh@495 -- # sleep 5 00:15:03.214 Timeout while waiting for response: 00:15:03.214 00:15:03.214 00:15:07.397 00:15:07.397 Latency(us) 00:15:07.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.397 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:07.397 EE_Dev_1 : 0.92 34683.19 135.48 5.41 0.00 457.89 212.25 1280.93 00:15:07.397 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:07.397 Dev_2 : 5.00 89959.95 351.41 0.00 0.00 175.11 57.48 36938.47 00:15:07.397 =================================================================================================================== 00:15:07.397 Total : 124643.14 486.89 5.41 0.00 193.92 57.48 36938.47 00:15:08.333 04:57:37 -- bdev/blockdev.sh@497 -- # killprocess 123641 00:15:08.333 04:57:37 -- common/autotest_common.sh@926 -- # '[' -z 123641 ']' 00:15:08.333 04:57:37 -- common/autotest_common.sh@930 -- # kill -0 123641 00:15:08.333 04:57:37 -- common/autotest_common.sh@931 -- # uname 00:15:08.333 04:57:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:08.333 04:57:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123641 00:15:08.333 04:57:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:08.333 04:57:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:08.333 killing process with pid 123641 00:15:08.333 04:57:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123641' 00:15:08.333 Received shutdown 
signal, test time was about 5.000000 seconds 00:15:08.333 00:15:08.333 Latency(us) 00:15:08.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.333 =================================================================================================================== 00:15:08.333 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:08.333 04:57:38 -- common/autotest_common.sh@945 -- # kill 123641 00:15:08.333 04:57:38 -- common/autotest_common.sh@950 -- # wait 123641 00:15:08.592 04:57:38 -- bdev/blockdev.sh@501 -- # ERR_PID=123749 00:15:08.592 04:57:38 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:15:08.592 Process error testing pid: 123749 00:15:08.592 04:57:38 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 123749' 00:15:08.592 04:57:38 -- bdev/blockdev.sh@503 -- # waitforlisten 123749 00:15:08.592 04:57:38 -- common/autotest_common.sh@819 -- # '[' -z 123749 ']' 00:15:08.592 04:57:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.592 04:57:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:08.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.592 04:57:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.592 04:57:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:08.592 04:57:38 -- common/autotest_common.sh@10 -- # set +x 00:15:08.851 [2024-04-27 04:57:38.514654] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:08.851 [2024-04-27 04:57:38.514940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123749 ] 00:15:08.851 [2024-04-27 04:57:38.679645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.110 [2024-04-27 04:57:38.803464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.679 04:57:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:09.679 04:57:39 -- common/autotest_common.sh@852 -- # return 0 00:15:09.679 04:57:39 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:15:09.679 04:57:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.679 04:57:39 -- common/autotest_common.sh@10 -- # set +x 00:15:09.679 Dev_1 00:15:09.679 04:57:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.679 04:57:39 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:15:09.679 04:57:39 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:15:09.679 04:57:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:09.679 04:57:39 -- common/autotest_common.sh@889 -- # local i 00:15:09.679 04:57:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:09.679 04:57:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:09.679 04:57:39 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:15:09.679 04:57:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.679 04:57:39 -- common/autotest_common.sh@10 -- # set +x 00:15:09.679 04:57:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.679 04:57:39 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:15:09.679 04:57:39 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.679 04:57:39 -- common/autotest_common.sh@10 -- # set +x 00:15:09.679 [ 00:15:09.679 { 00:15:09.679 "name": "Dev_1", 00:15:09.679 "aliases": [ 00:15:09.679 "0e7d8031-b625-431c-a334-7f1a49da108b" 00:15:09.679 ], 00:15:09.679 "product_name": "Malloc disk", 00:15:09.679 "block_size": 512, 00:15:09.679 "num_blocks": 262144, 00:15:09.679 "uuid": "0e7d8031-b625-431c-a334-7f1a49da108b", 00:15:09.679 "assigned_rate_limits": { 00:15:09.679 "rw_ios_per_sec": 0, 00:15:09.679 "rw_mbytes_per_sec": 0, 00:15:09.679 "r_mbytes_per_sec": 0, 00:15:09.679 "w_mbytes_per_sec": 0 00:15:09.679 }, 00:15:09.680 "claimed": false, 00:15:09.680 "zoned": false, 00:15:09.680 "supported_io_types": { 00:15:09.680 "read": true, 00:15:09.680 "write": true, 00:15:09.680 "unmap": true, 00:15:09.680 "write_zeroes": true, 00:15:09.680 "flush": true, 00:15:09.680 "reset": true, 00:15:09.680 "compare": false, 00:15:09.680 "compare_and_write": false, 00:15:09.680 "abort": true, 00:15:09.680 "nvme_admin": false, 00:15:09.680 "nvme_io": false 00:15:09.680 }, 00:15:09.680 "memory_domains": [ 00:15:09.680 { 00:15:09.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.680 "dma_device_type": 2 00:15:09.680 } 00:15:09.680 ], 00:15:09.680 "driver_specific": {} 00:15:09.680 } 00:15:09.680 ] 00:15:09.680 04:57:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.680 04:57:39 -- common/autotest_common.sh@895 -- # return 0 00:15:09.680 04:57:39 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:15:09.680 04:57:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.680 04:57:39 -- common/autotest_common.sh@10 -- # set +x 00:15:09.680 true 00:15:09.680 04:57:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.680 04:57:39 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:15:09.680 04:57:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.680 04:57:39 -- common/autotest_common.sh@10 -- # set +x 00:15:09.938 Dev_2 00:15:09.938 04:57:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.938 04:57:39 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:15:09.938 04:57:39 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:15:09.938 04:57:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:09.938 04:57:39 -- common/autotest_common.sh@889 -- # local i 00:15:09.938 04:57:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:09.938 04:57:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:09.938 04:57:39 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:15:09.938 04:57:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.938 04:57:39 -- common/autotest_common.sh@10 -- # set +x 00:15:09.939 04:57:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.939 04:57:39 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:15:09.939 04:57:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.939 04:57:39 -- common/autotest_common.sh@10 -- # set +x 00:15:09.939 [ 00:15:09.939 { 00:15:09.939 "name": "Dev_2", 00:15:09.939 "aliases": [ 00:15:09.939 "3b1ddb2a-1eac-4cb2-813f-abef5458f721" 00:15:09.939 ], 00:15:09.939 "product_name": "Malloc disk", 00:15:09.939 "block_size": 512, 00:15:09.939 "num_blocks": 262144, 00:15:09.939 "uuid": "3b1ddb2a-1eac-4cb2-813f-abef5458f721", 00:15:09.939 "assigned_rate_limits": { 00:15:09.939 "rw_ios_per_sec": 0, 00:15:09.939 "rw_mbytes_per_sec": 0, 00:15:09.939 
"r_mbytes_per_sec": 0, 00:15:09.939 "w_mbytes_per_sec": 0 00:15:09.939 }, 00:15:09.939 "claimed": false, 00:15:09.939 "zoned": false, 00:15:09.939 "supported_io_types": { 00:15:09.939 "read": true, 00:15:09.939 "write": true, 00:15:09.939 "unmap": true, 00:15:09.939 "write_zeroes": true, 00:15:09.939 "flush": true, 00:15:09.939 "reset": true, 00:15:09.939 "compare": false, 00:15:09.939 "compare_and_write": false, 00:15:09.939 "abort": true, 00:15:09.939 "nvme_admin": false, 00:15:09.939 "nvme_io": false 00:15:09.939 }, 00:15:09.939 "memory_domains": [ 00:15:09.939 { 00:15:09.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.939 "dma_device_type": 2 00:15:09.939 } 00:15:09.939 ], 00:15:09.939 "driver_specific": {} 00:15:09.939 } 00:15:09.939 ] 00:15:09.939 04:57:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.939 04:57:39 -- common/autotest_common.sh@895 -- # return 0 00:15:09.939 04:57:39 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:15:09.939 04:57:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.939 04:57:39 -- common/autotest_common.sh@10 -- # set +x 00:15:09.939 04:57:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.939 04:57:39 -- bdev/blockdev.sh@513 -- # NOT wait 123749 00:15:09.939 04:57:39 -- common/autotest_common.sh@640 -- # local es=0 00:15:09.939 04:57:39 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:15:09.939 04:57:39 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 123749 00:15:09.939 04:57:39 -- common/autotest_common.sh@628 -- # local arg=wait 00:15:09.939 04:57:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:09.939 04:57:39 -- common/autotest_common.sh@632 -- # type -t wait 00:15:09.939 04:57:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:09.939 04:57:39 -- common/autotest_common.sh@643 -- # wait 123749 00:15:09.939 Running I/O for 5 seconds... 
00:15:09.939 task offset: 240000 on job bdev=EE_Dev_1 fails 00:15:09.939 00:15:09.939 Latency(us) 00:15:09.939 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.939 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:09.939 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:15:09.939 EE_Dev_1 : 0.00 20618.56 80.54 4686.04 0.00 516.21 219.69 942.08 00:15:09.939 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:09.939 Dev_2 : 0.00 17400.76 67.97 0.00 0.00 581.53 202.01 1027.72 00:15:09.939 =================================================================================================================== 00:15:09.939 Total : 38019.32 148.51 4686.04 0.00 551.63 202.01 1027.72 00:15:09.939 [2024-04-27 04:57:39.762778] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:09.939 request: 00:15:09.939 { 00:15:09.939 "method": "perform_tests", 00:15:09.939 "req_id": 1 00:15:09.939 } 00:15:09.939 Got JSON-RPC error response 00:15:09.939 response: 00:15:09.939 { 00:15:09.939 "code": -32603, 00:15:09.939 "message": "bdevperf failed with error Operation not permitted" 00:15:09.939 } 00:15:10.504 04:57:40 -- common/autotest_common.sh@643 -- # es=255 00:15:10.504 04:57:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:10.504 04:57:40 -- common/autotest_common.sh@652 -- # es=127 00:15:10.504 04:57:40 -- common/autotest_common.sh@653 -- # case "$es" in 00:15:10.504 04:57:40 -- common/autotest_common.sh@660 -- # es=1 00:15:10.504 04:57:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:10.504 00:15:10.504 real 0m9.580s 00:15:10.504 user 0m9.682s 00:15:10.504 sys 0m0.910s 00:15:10.504 04:57:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.504 04:57:40 -- common/autotest_common.sh@10 -- # set +x 00:15:10.504 ************************************ 00:15:10.504 END TEST bdev_error 00:15:10.504 ************************************ 00:15:10.504 04:57:40 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:15:10.504 04:57:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:10.504 04:57:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:10.504 04:57:40 -- common/autotest_common.sh@10 -- # set +x 00:15:10.762 ************************************ 00:15:10.762 START TEST bdev_stat 00:15:10.762 ************************************ 00:15:10.762 04:57:40 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:15:10.762 04:57:40 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:15:10.762 04:57:40 -- bdev/blockdev.sh@594 -- # STAT_PID=123797 00:15:10.762 Process Bdev IO statistics testing pid: 123797 00:15:10.762 04:57:40 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 123797' 00:15:10.762 04:57:40 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:15:10.762 04:57:40 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:15:10.762 04:57:40 -- bdev/blockdev.sh@597 -- # waitforlisten 123797 00:15:10.762 04:57:40 -- common/autotest_common.sh@819 -- # '[' -z 123797 ']' 00:15:10.762 04:57:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.762 04:57:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:10.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
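The stat suite that begins here is about accounting rather than throughput: a Malloc_STAT bdev takes a 10-second randread load from two cores, and the script then reads bdev_get_iostat both for the device as a whole and broken out per I/O channel (the -c form queried further down), expecting the per-channel numbers to be consistent with the device totals sampled around them. The two query shapes:

    # aggregate counters for the bdev
    scripts/rpc.py bdev_get_iostat -b Malloc_STAT
    # the same counters broken out per channel, keyed by thread
    scripts/rpc.py bdev_get_iostat -b Malloc_STAT -c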
00:15:10.762 04:57:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.762 04:57:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:10.762 04:57:40 -- common/autotest_common.sh@10 -- # set +x 00:15:10.762 [2024-04-27 04:57:40.457392] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:10.762 [2024-04-27 04:57:40.458271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123797 ] 00:15:10.762 [2024-04-27 04:57:40.635608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:11.019 [2024-04-27 04:57:40.774385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.019 [2024-04-27 04:57:40.774395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.586 04:57:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:11.586 04:57:41 -- common/autotest_common.sh@852 -- # return 0 00:15:11.586 04:57:41 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:15:11.586 04:57:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.586 04:57:41 -- common/autotest_common.sh@10 -- # set +x 00:15:11.845 Malloc_STAT 00:15:11.845 04:57:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.845 04:57:41 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:15:11.845 04:57:41 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:15:11.845 04:57:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:11.845 04:57:41 -- common/autotest_common.sh@889 -- # local i 00:15:11.845 04:57:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:11.845 04:57:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:11.845 04:57:41 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:15:11.845 04:57:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.845 04:57:41 -- common/autotest_common.sh@10 -- # set +x 00:15:11.845 04:57:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.845 04:57:41 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:15:11.845 04:57:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.845 04:57:41 -- common/autotest_common.sh@10 -- # set +x 00:15:11.845 [ 00:15:11.845 { 00:15:11.845 "name": "Malloc_STAT", 00:15:11.845 "aliases": [ 00:15:11.845 "47f7d71e-6219-4e5d-956a-b3ad025c2721" 00:15:11.845 ], 00:15:11.845 "product_name": "Malloc disk", 00:15:11.845 "block_size": 512, 00:15:11.845 "num_blocks": 262144, 00:15:11.845 "uuid": "47f7d71e-6219-4e5d-956a-b3ad025c2721", 00:15:11.845 "assigned_rate_limits": { 00:15:11.845 "rw_ios_per_sec": 0, 00:15:11.845 "rw_mbytes_per_sec": 0, 00:15:11.845 "r_mbytes_per_sec": 0, 00:15:11.845 "w_mbytes_per_sec": 0 00:15:11.845 }, 00:15:11.845 "claimed": false, 00:15:11.845 "zoned": false, 00:15:11.845 "supported_io_types": { 00:15:11.845 "read": true, 00:15:11.845 "write": true, 00:15:11.845 "unmap": true, 00:15:11.845 "write_zeroes": true, 00:15:11.845 "flush": true, 00:15:11.845 "reset": true, 00:15:11.845 "compare": false, 00:15:11.845 "compare_and_write": false, 00:15:11.845 "abort": true, 00:15:11.845 "nvme_admin": false, 00:15:11.845 "nvme_io": false 00:15:11.845 }, 00:15:11.845 "memory_domains": [ 00:15:11.845 { 
00:15:11.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.845 "dma_device_type": 2 00:15:11.845 } 00:15:11.845 ], 00:15:11.845 "driver_specific": {} 00:15:11.845 } 00:15:11.845 ] 00:15:11.845 04:57:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.845 04:57:41 -- common/autotest_common.sh@895 -- # return 0 00:15:11.845 04:57:41 -- bdev/blockdev.sh@603 -- # sleep 2 00:15:11.845 04:57:41 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:11.845 Running I/O for 10 seconds... 00:15:13.745 04:57:43 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:15:13.745 04:57:43 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:15:13.745 04:57:43 -- bdev/blockdev.sh@558 -- # local iostats 00:15:13.745 04:57:43 -- bdev/blockdev.sh@559 -- # local io_count1 00:15:13.745 04:57:43 -- bdev/blockdev.sh@560 -- # local io_count2 00:15:13.745 04:57:43 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:15:13.745 04:57:43 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:15:13.745 04:57:43 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:15:13.745 04:57:43 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:15:13.745 04:57:43 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:15:13.745 04:57:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.745 04:57:43 -- common/autotest_common.sh@10 -- # set +x 00:15:13.745 04:57:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.745 04:57:43 -- bdev/blockdev.sh@566 -- # iostats='{ 00:15:13.745 "tick_rate": 2200000000, 00:15:13.745 "ticks": 1771847850389, 00:15:13.745 "bdevs": [ 00:15:13.745 { 00:15:13.745 "name": "Malloc_STAT", 00:15:13.745 "bytes_read": 887132672, 00:15:13.745 "num_read_ops": 216579, 00:15:13.745 "bytes_written": 0, 00:15:13.745 "num_write_ops": 0, 00:15:13.745 "bytes_unmapped": 0, 00:15:13.745 "num_unmap_ops": 0, 00:15:13.745 "bytes_copied": 0, 00:15:13.745 "num_copy_ops": 0, 00:15:13.745 "read_latency_ticks": 2145322410318, 00:15:13.745 "max_read_latency_ticks": 13728346, 00:15:13.745 "min_read_latency_ticks": 611126, 00:15:13.745 "write_latency_ticks": 0, 00:15:13.745 "max_write_latency_ticks": 0, 00:15:13.745 "min_write_latency_ticks": 0, 00:15:13.745 "unmap_latency_ticks": 0, 00:15:13.745 "max_unmap_latency_ticks": 0, 00:15:13.745 "min_unmap_latency_ticks": 0, 00:15:13.745 "copy_latency_ticks": 0, 00:15:13.745 "max_copy_latency_ticks": 0, 00:15:13.745 "min_copy_latency_ticks": 0, 00:15:13.745 "io_error": {} 00:15:13.745 } 00:15:13.745 ] 00:15:13.745 }' 00:15:13.745 04:57:43 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:15:13.745 04:57:43 -- bdev/blockdev.sh@567 -- # io_count1=216579 00:15:13.745 04:57:43 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:15:13.745 04:57:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.745 04:57:43 -- common/autotest_common.sh@10 -- # set +x 00:15:14.004 04:57:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:14.004 04:57:43 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:15:14.004 "tick_rate": 2200000000, 00:15:14.004 "ticks": 1772002394008, 00:15:14.004 "name": "Malloc_STAT", 00:15:14.004 "channels": [ 00:15:14.004 { 00:15:14.004 "thread_id": 2, 00:15:14.004 "bytes_read": 458227712, 00:15:14.004 "num_read_ops": 111872, 00:15:14.004 "bytes_written": 0, 00:15:14.004 "num_write_ops": 0, 00:15:14.004 "bytes_unmapped": 0, 00:15:14.004 "num_unmap_ops": 0, 00:15:14.004 
"bytes_copied": 0, 00:15:14.004 "num_copy_ops": 0, 00:15:14.004 "read_latency_ticks": 1110889744114, 00:15:14.004 "max_read_latency_ticks": 14331147, 00:15:14.004 "min_read_latency_ticks": 9070432, 00:15:14.004 "write_latency_ticks": 0, 00:15:14.004 "max_write_latency_ticks": 0, 00:15:14.004 "min_write_latency_ticks": 0, 00:15:14.004 "unmap_latency_ticks": 0, 00:15:14.004 "max_unmap_latency_ticks": 0, 00:15:14.004 "min_unmap_latency_ticks": 0, 00:15:14.004 "copy_latency_ticks": 0, 00:15:14.004 "max_copy_latency_ticks": 0, 00:15:14.004 "min_copy_latency_ticks": 0 00:15:14.004 }, 00:15:14.004 { 00:15:14.004 "thread_id": 3, 00:15:14.004 "bytes_read": 460324864, 00:15:14.004 "num_read_ops": 112384, 00:15:14.004 "bytes_written": 0, 00:15:14.004 "num_write_ops": 0, 00:15:14.004 "bytes_unmapped": 0, 00:15:14.004 "num_unmap_ops": 0, 00:15:14.004 "bytes_copied": 0, 00:15:14.004 "num_copy_ops": 0, 00:15:14.004 "read_latency_ticks": 1111558845627, 00:15:14.004 "max_read_latency_ticks": 11760362, 00:15:14.004 "min_read_latency_ticks": 8169714, 00:15:14.004 "write_latency_ticks": 0, 00:15:14.004 "max_write_latency_ticks": 0, 00:15:14.004 "min_write_latency_ticks": 0, 00:15:14.004 "unmap_latency_ticks": 0, 00:15:14.004 "max_unmap_latency_ticks": 0, 00:15:14.004 "min_unmap_latency_ticks": 0, 00:15:14.004 "copy_latency_ticks": 0, 00:15:14.004 "max_copy_latency_ticks": 0, 00:15:14.004 "min_copy_latency_ticks": 0 00:15:14.004 } 00:15:14.004 ] 00:15:14.004 }' 00:15:14.004 04:57:43 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:15:14.004 04:57:43 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=111872 00:15:14.004 04:57:43 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=111872 00:15:14.004 04:57:43 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:15:14.004 04:57:43 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=112384 00:15:14.004 04:57:43 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=224256 00:15:14.004 04:57:43 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:15:14.004 04:57:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:14.004 04:57:43 -- common/autotest_common.sh@10 -- # set +x 00:15:14.004 04:57:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:14.004 04:57:43 -- bdev/blockdev.sh@575 -- # iostats='{ 00:15:14.004 "tick_rate": 2200000000, 00:15:14.004 "ticks": 1772271104749, 00:15:14.004 "bdevs": [ 00:15:14.004 { 00:15:14.004 "name": "Malloc_STAT", 00:15:14.004 "bytes_read": 974164480, 00:15:14.004 "num_read_ops": 237827, 00:15:14.004 "bytes_written": 0, 00:15:14.004 "num_write_ops": 0, 00:15:14.004 "bytes_unmapped": 0, 00:15:14.004 "num_unmap_ops": 0, 00:15:14.004 "bytes_copied": 0, 00:15:14.004 "num_copy_ops": 0, 00:15:14.004 "read_latency_ticks": 2360341216005, 00:15:14.004 "max_read_latency_ticks": 14600382, 00:15:14.004 "min_read_latency_ticks": 611126, 00:15:14.004 "write_latency_ticks": 0, 00:15:14.004 "max_write_latency_ticks": 0, 00:15:14.004 "min_write_latency_ticks": 0, 00:15:14.004 "unmap_latency_ticks": 0, 00:15:14.004 "max_unmap_latency_ticks": 0, 00:15:14.004 "min_unmap_latency_ticks": 0, 00:15:14.004 "copy_latency_ticks": 0, 00:15:14.004 "max_copy_latency_ticks": 0, 00:15:14.004 "min_copy_latency_ticks": 0, 00:15:14.004 "io_error": {} 00:15:14.004 } 00:15:14.004 ] 00:15:14.004 }' 00:15:14.004 04:57:43 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:15:14.004 04:57:43 -- bdev/blockdev.sh@576 -- # io_count2=237827 00:15:14.004 04:57:43 -- bdev/blockdev.sh@581 -- # '[' 224256 
-lt 216579 ']' 00:15:14.004 04:57:43 -- bdev/blockdev.sh@581 -- # '[' 224256 -gt 237827 ']' 00:15:14.004 04:57:43 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:15:14.004 04:57:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:14.004 04:57:43 -- common/autotest_common.sh@10 -- # set +x 00:15:14.004 00:15:14.004 Latency(us) 00:15:14.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.004 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:15:14.004 Malloc_STAT : 2.18 56370.95 220.20 0.00 0.00 4529.95 1333.06 6642.97 00:15:14.004 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:14.004 Malloc_STAT : 2.18 56814.21 221.93 0.00 0.00 4494.90 901.12 5362.04 00:15:14.004 =================================================================================================================== 00:15:14.004 Total : 113185.15 442.13 0.00 0.00 4512.35 901.12 6642.97 00:15:14.004 0 00:15:14.004 04:57:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:14.004 04:57:43 -- bdev/blockdev.sh@607 -- # killprocess 123797 00:15:14.004 04:57:43 -- common/autotest_common.sh@926 -- # '[' -z 123797 ']' 00:15:14.004 04:57:43 -- common/autotest_common.sh@930 -- # kill -0 123797 00:15:14.004 04:57:43 -- common/autotest_common.sh@931 -- # uname 00:15:14.004 04:57:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:14.004 04:57:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123797 00:15:14.262 04:57:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:14.262 04:57:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:14.262 killing process with pid 123797 00:15:14.262 04:57:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123797' 00:15:14.262 Received shutdown signal, test time was about 2.246651 seconds 00:15:14.262 00:15:14.262 Latency(us) 00:15:14.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.262 =================================================================================================================== 00:15:14.262 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:14.262 04:57:43 -- common/autotest_common.sh@945 -- # kill 123797 00:15:14.262 04:57:43 -- common/autotest_common.sh@950 -- # wait 123797 00:15:14.521 04:57:44 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:15:14.521 00:15:14.521 real 0m3.914s 00:15:14.521 user 0m7.649s 00:15:14.521 sys 0m0.482s 00:15:14.521 04:57:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:14.521 04:57:44 -- common/autotest_common.sh@10 -- # set +x 00:15:14.521 ************************************ 00:15:14.521 END TEST bdev_stat 00:15:14.521 ************************************ 00:15:14.521 04:57:44 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:15:14.521 04:57:44 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:15:14.521 04:57:44 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:15:14.521 04:57:44 -- bdev/blockdev.sh@809 -- # cleanup 00:15:14.521 04:57:44 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:14.521 04:57:44 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:14.521 04:57:44 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:15:14.521 04:57:44 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:15:14.521 04:57:44 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:15:14.521 04:57:44 -- 
bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:15:14.521 00:15:14.521 real 2m2.122s 00:15:14.521 user 5m19.166s 00:15:14.521 sys 0m22.004s 00:15:14.521 04:57:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:14.521 04:57:44 -- common/autotest_common.sh@10 -- # set +x 00:15:14.521 ************************************ 00:15:14.521 END TEST blockdev_general 00:15:14.521 ************************************ 00:15:14.521 04:57:44 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:15:14.521 04:57:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:14.521 04:57:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:14.521 04:57:44 -- common/autotest_common.sh@10 -- # set +x 00:15:14.521 ************************************ 00:15:14.521 START TEST bdev_raid 00:15:14.521 ************************************ 00:15:14.521 04:57:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:15:14.780 * Looking for test storage... 00:15:14.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:14.780 04:57:44 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:14.780 04:57:44 -- bdev/nbd_common.sh@6 -- # set -e 00:15:14.780 04:57:44 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:15:14.780 04:57:44 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:15:14.780 04:57:44 -- bdev/bdev_raid.sh@716 -- # uname -s 00:15:14.780 04:57:44 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:15:14.780 04:57:44 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:15:14.780 04:57:44 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:15:14.780 04:57:44 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:15:14.780 04:57:44 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:15:14.780 04:57:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:14.780 04:57:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:14.780 04:57:44 -- common/autotest_common.sh@10 -- # set +x 00:15:14.780 ************************************ 00:15:14.780 START TEST raid_function_test_raid0 00:15:14.780 ************************************ 00:15:14.780 04:57:44 -- common/autotest_common.sh@1104 -- # raid_function_test raid0 00:15:14.780 04:57:44 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:15:14.780 04:57:44 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:15:14.780 04:57:44 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:15:14.780 04:57:44 -- bdev/bdev_raid.sh@86 -- # raid_pid=123941 00:15:14.780 04:57:44 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:14.780 04:57:44 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 123941' 00:15:14.780 Process raid pid: 123941 00:15:14.780 04:57:44 -- bdev/bdev_raid.sh@88 -- # waitforlisten 123941 /var/tmp/spdk-raid.sock 00:15:14.780 04:57:44 -- common/autotest_common.sh@819 -- # '[' -z 123941 ']' 00:15:14.780 04:57:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:14.780 04:57:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:14.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:15:14.780 04:57:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:14.780 04:57:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:14.780 04:57:44 -- common/autotest_common.sh@10 -- # set +x 00:15:14.780 [2024-04-27 04:57:44.574601] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:14.780 [2024-04-27 04:57:44.574951] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.039 [2024-04-27 04:57:44.755764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.039 [2024-04-27 04:57:44.868103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.297 [2024-04-27 04:57:44.974152] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:15.863 04:57:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:15.863 04:57:45 -- common/autotest_common.sh@852 -- # return 0 00:15:15.863 04:57:45 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:15:15.863 04:57:45 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:15:15.863 04:57:45 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:15.863 04:57:45 -- bdev/bdev_raid.sh@70 -- # cat 00:15:15.863 04:57:45 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:15:16.121 [2024-04-27 04:57:45.793644] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:15:16.121 [2024-04-27 04:57:45.796205] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:15:16.121 [2024-04-27 04:57:45.796358] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:15:16.121 [2024-04-27 04:57:45.796373] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:16.121 [2024-04-27 04:57:45.796625] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:16.121 [2024-04-27 04:57:45.797129] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:15:16.121 [2024-04-27 04:57:45.797153] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006f80 00:15:16.121 [2024-04-27 04:57:45.797383] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.121 Base_1 00:15:16.121 Base_2 00:15:16.121 04:57:45 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:16.121 04:57:45 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:15:16.121 04:57:45 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:15:16.380 04:57:46 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:15:16.380 04:57:46 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:15:16.380 04:57:46 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:15:16.380 04:57:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:16.380 04:57:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:15:16.380 04:57:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:16.380 04:57:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:16.380 
04:57:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:16.380 04:57:46 -- bdev/nbd_common.sh@12 -- # local i 00:15:16.380 04:57:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:16.380 04:57:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:16.380 04:57:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:15:16.638 [2024-04-27 04:57:46.309890] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:16.638 /dev/nbd0 00:15:16.638 04:57:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:16.638 04:57:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:16.639 04:57:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:15:16.639 04:57:46 -- common/autotest_common.sh@857 -- # local i 00:15:16.639 04:57:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:16.639 04:57:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:16.639 04:57:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:15:16.639 04:57:46 -- common/autotest_common.sh@861 -- # break 00:15:16.639 04:57:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:16.639 04:57:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:16.639 04:57:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.639 1+0 records in 00:15:16.639 1+0 records out 00:15:16.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360125 s, 11.4 MB/s 00:15:16.639 04:57:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.639 04:57:46 -- common/autotest_common.sh@874 -- # size=4096 00:15:16.639 04:57:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.639 04:57:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:16.639 04:57:46 -- common/autotest_common.sh@877 -- # return 0 00:15:16.639 04:57:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:16.639 04:57:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:16.639 04:57:46 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:16.639 04:57:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:16.639 04:57:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:15:16.897 04:57:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:16.897 { 00:15:16.897 "nbd_device": "/dev/nbd0", 00:15:16.897 "bdev_name": "raid" 00:15:16.897 } 00:15:16.897 ]' 00:15:16.897 04:57:46 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:16.897 { 00:15:16.897 "nbd_device": "/dev/nbd0", 00:15:16.897 "bdev_name": "raid" 00:15:16.897 } 00:15:16.897 ]' 00:15:16.897 04:57:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:16.897 04:57:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:15:16.897 04:57:46 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:15:16.897 04:57:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:16.897 04:57:46 -- bdev/nbd_common.sh@65 -- # count=1 00:15:16.897 04:57:46 -- bdev/nbd_common.sh@66 -- # echo 1 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@98 -- # count=1 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 
00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@20 -- # local blksize 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:15:16.897 4096+0 records in 00:15:16.897 4096+0 records out 00:15:16.897 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0258572 s, 81.1 MB/s 00:15:16.897 04:57:46 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:15:17.156 4096+0 records in 00:15:17.156 4096+0 records out 00:15:17.156 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.281333 s, 7.5 MB/s 00:15:17.156 04:57:47 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:15:17.156 04:57:47 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:17.156 04:57:47 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:15:17.156 04:57:47 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:17.156 04:57:47 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:15:17.156 04:57:47 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:15:17.156 04:57:47 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:15:17.413 128+0 records in 00:15:17.413 128+0 records out 00:15:17.413 65536 bytes (66 kB, 64 KiB) copied, 0.000519807 s, 126 MB/s 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:15:17.413 2035+0 records in 00:15:17.413 2035+0 records out 00:15:17.413 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00831683 s, 125 MB/s 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:15:17.413 04:57:47 -- 
bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:15:17.413 456+0 records in 00:15:17.413 456+0 records out 00:15:17.413 233472 bytes (233 kB, 228 KiB) copied, 0.00140252 s, 166 MB/s 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:17.413 04:57:47 -- bdev/bdev_raid.sh@53 -- # return 0 00:15:17.414 04:57:47 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:15:17.414 04:57:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:17.414 04:57:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:17.414 04:57:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:17.414 04:57:47 -- bdev/nbd_common.sh@51 -- # local i 00:15:17.414 04:57:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.414 04:57:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:15:17.671 04:57:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:17.671 [2024-04-27 04:57:47.358308] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.672 04:57:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:17.672 04:57:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:17.672 04:57:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.672 04:57:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.672 04:57:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:17.672 04:57:47 -- bdev/nbd_common.sh@41 -- # break 00:15:17.672 04:57:47 -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.672 04:57:47 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:17.672 04:57:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:17.672 04:57:47 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:15:17.931 04:57:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:17.931 04:57:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:17.931 04:57:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:17.931 04:57:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:17.931 04:57:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:17.931 04:57:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:17.931 04:57:47 -- bdev/nbd_common.sh@65 -- # true 00:15:17.931 04:57:47 -- bdev/nbd_common.sh@65 -- # count=0 00:15:17.931 04:57:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:17.931 04:57:47 -- bdev/bdev_raid.sh@106 -- # count=0 00:15:17.931 04:57:47 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:15:17.931 04:57:47 -- bdev/bdev_raid.sh@111 -- # killprocess 123941 00:15:17.931 04:57:47 -- common/autotest_common.sh@926 -- # '[' -z 123941 ']' 00:15:17.931 04:57:47 -- common/autotest_common.sh@930 -- # kill -0 123941 00:15:17.931 04:57:47 -- common/autotest_common.sh@931 -- # uname 00:15:17.931 04:57:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:17.931 04:57:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123941 00:15:17.931 04:57:47 
-- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:17.931 04:57:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:17.931 04:57:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123941' 00:15:17.931 killing process with pid 123941 00:15:17.931 04:57:47 -- common/autotest_common.sh@945 -- # kill 123941 00:15:17.931 [2024-04-27 04:57:47.692494] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:17.931 04:57:47 -- common/autotest_common.sh@950 -- # wait 123941 00:15:17.931 [2024-04-27 04:57:47.692835] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.931 [2024-04-27 04:57:47.693051] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.931 [2024-04-27 04:57:47.693183] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name raid, state offline 00:15:17.931 [2024-04-27 04:57:47.724234] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.498 04:57:48 -- bdev/bdev_raid.sh@113 -- # return 0 00:15:18.498 00:15:18.498 real 0m3.586s 00:15:18.498 user 0m4.753s 00:15:18.498 sys 0m1.059s 00:15:18.498 04:57:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:18.498 04:57:48 -- common/autotest_common.sh@10 -- # set +x 00:15:18.498 ************************************ 00:15:18.498 END TEST raid_function_test_raid0 00:15:18.498 ************************************ 00:15:18.498 04:57:48 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:15:18.499 04:57:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:18.499 04:57:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:18.499 04:57:48 -- common/autotest_common.sh@10 -- # set +x 00:15:18.499 ************************************ 00:15:18.499 START TEST raid_function_test_concat 00:15:18.499 ************************************ 00:15:18.499 04:57:48 -- common/autotest_common.sh@1104 -- # raid_function_test concat 00:15:18.499 04:57:48 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:15:18.499 04:57:48 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:15:18.499 04:57:48 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:15:18.499 04:57:48 -- bdev/bdev_raid.sh@86 -- # raid_pid=124085 00:15:18.499 04:57:48 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:18.499 04:57:48 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 124085' 00:15:18.499 Process raid pid: 124085 00:15:18.499 04:57:48 -- bdev/bdev_raid.sh@88 -- # waitforlisten 124085 /var/tmp/spdk-raid.sock 00:15:18.499 04:57:48 -- common/autotest_common.sh@819 -- # '[' -z 124085 ']' 00:15:18.499 04:57:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:18.499 04:57:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:18.499 04:57:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:18.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:18.499 04:57:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:18.499 04:57:48 -- common/autotest_common.sh@10 -- # set +x 00:15:18.499 [2024-04-27 04:57:48.222749] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:15:18.499 [2024-04-27 04:57:48.223239] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.757 [2024-04-27 04:57:48.399453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.757 [2024-04-27 04:57:48.518597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.757 [2024-04-27 04:57:48.604723] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.323 04:57:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:19.323 04:57:49 -- common/autotest_common.sh@852 -- # return 0 00:15:19.323 04:57:49 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:15:19.323 04:57:49 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:15:19.323 04:57:49 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:19.323 04:57:49 -- bdev/bdev_raid.sh@70 -- # cat 00:15:19.323 04:57:49 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:15:19.890 [2024-04-27 04:57:49.505838] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:15:19.890 [2024-04-27 04:57:49.508682] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:15:19.890 [2024-04-27 04:57:49.508946] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:15:19.890 [2024-04-27 04:57:49.509061] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:19.890 [2024-04-27 04:57:49.509304] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:19.890 [2024-04-27 04:57:49.509813] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:15:19.890 [2024-04-27 04:57:49.509967] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006f80 00:15:19.890 [2024-04-27 04:57:49.510298] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.890 Base_1 00:15:19.890 Base_2 00:15:19.890 04:57:49 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:19.890 04:57:49 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:15:19.890 04:57:49 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:15:20.148 04:57:49 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:15:20.148 04:57:49 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:15:20.148 04:57:49 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:15:20.148 04:57:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:20.148 04:57:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:15:20.148 04:57:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:20.148 04:57:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:20.148 04:57:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:20.148 04:57:49 -- bdev/nbd_common.sh@12 -- # local i 00:15:20.148 04:57:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:20.148 04:57:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:20.148 04:57:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:15:20.148 [2024-04-27 
04:57:50.026599] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:20.406 /dev/nbd0 00:15:20.406 04:57:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:20.406 04:57:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:20.406 04:57:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:15:20.406 04:57:50 -- common/autotest_common.sh@857 -- # local i 00:15:20.406 04:57:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:20.406 04:57:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:20.406 04:57:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:15:20.406 04:57:50 -- common/autotest_common.sh@861 -- # break 00:15:20.406 04:57:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:20.406 04:57:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:20.406 04:57:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.406 1+0 records in 00:15:20.406 1+0 records out 00:15:20.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408799 s, 10.0 MB/s 00:15:20.406 04:57:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.406 04:57:50 -- common/autotest_common.sh@874 -- # size=4096 00:15:20.406 04:57:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.406 04:57:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:20.406 04:57:50 -- common/autotest_common.sh@877 -- # return 0 00:15:20.406 04:57:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:20.406 04:57:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:20.406 04:57:50 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:20.406 04:57:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:20.406 04:57:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:15:20.664 04:57:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:20.664 { 00:15:20.664 "nbd_device": "/dev/nbd0", 00:15:20.664 "bdev_name": "raid" 00:15:20.664 } 00:15:20.664 ]' 00:15:20.664 04:57:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:20.664 04:57:50 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:20.664 { 00:15:20.664 "nbd_device": "/dev/nbd0", 00:15:20.664 "bdev_name": "raid" 00:15:20.664 } 00:15:20.664 ]' 00:15:20.664 04:57:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:15:20.664 04:57:50 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:15:20.664 04:57:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:20.664 04:57:50 -- bdev/nbd_common.sh@65 -- # count=1 00:15:20.664 04:57:50 -- bdev/nbd_common.sh@66 -- # echo 1 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@98 -- # count=1 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@20 -- # local blksize 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@21 -- # cut -d 
' ' -f 5 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:15:20.664 4096+0 records in 00:15:20.664 4096+0 records out 00:15:20.664 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0278658 s, 75.3 MB/s 00:15:20.664 04:57:50 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:15:20.922 4096+0 records in 00:15:20.922 4096+0 records out 00:15:20.922 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.266784 s, 7.9 MB/s 00:15:20.922 04:57:50 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:15:20.922 04:57:50 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:20.922 04:57:50 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:15:20.923 128+0 records in 00:15:20.923 128+0 records out 00:15:20.923 65536 bytes (66 kB, 64 KiB) copied, 0.00284618 s, 23.0 MB/s 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:15:20.923 2035+0 records in 00:15:20.923 2035+0 records out 00:15:20.923 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00822493 s, 127 MB/s 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:15:20.923 456+0 records in 00:15:20.923 456+0 records out 00:15:20.923 233472 bytes (233 kB, 228 KiB) copied, 0.00154326 s, 151 MB/s 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:15:20.923 04:57:50 -- 
bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@53 -- # return 0 00:15:20.923 04:57:50 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:15:20.923 04:57:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:20.923 04:57:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:20.923 04:57:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:20.923 04:57:50 -- bdev/nbd_common.sh@51 -- # local i 00:15:20.923 04:57:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.923 04:57:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:15:21.527 [2024-04-27 04:57:51.078197] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.527 04:57:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:21.527 04:57:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:21.527 04:57:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:21.527 04:57:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.527 04:57:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.527 04:57:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:21.527 04:57:51 -- bdev/nbd_common.sh@41 -- # break 00:15:21.527 04:57:51 -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.527 04:57:51 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:21.527 04:57:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:21.527 04:57:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:15:21.527 04:57:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:21.527 04:57:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:21.527 04:57:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:21.527 04:57:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:21.527 04:57:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:21.527 04:57:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:21.527 04:57:51 -- bdev/nbd_common.sh@65 -- # true 00:15:21.527 04:57:51 -- bdev/nbd_common.sh@65 -- # count=0 00:15:21.527 04:57:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:21.785 04:57:51 -- bdev/bdev_raid.sh@106 -- # count=0 00:15:21.785 04:57:51 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:15:21.785 04:57:51 -- bdev/bdev_raid.sh@111 -- # killprocess 124085 00:15:21.785 04:57:51 -- common/autotest_common.sh@926 -- # '[' -z 124085 ']' 00:15:21.785 04:57:51 -- common/autotest_common.sh@930 -- # kill -0 124085 00:15:21.785 04:57:51 -- common/autotest_common.sh@931 -- # uname 00:15:21.785 04:57:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:21.785 04:57:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124085 00:15:21.785 killing process with pid 124085 00:15:21.785 04:57:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:21.785 04:57:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:21.785 04:57:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124085' 00:15:21.785 04:57:51 -- common/autotest_common.sh@945 -- # kill 124085 00:15:21.785 [2024-04-27 04:57:51.443203] 
bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.785 04:57:51 -- common/autotest_common.sh@950 -- # wait 124085 00:15:21.785 [2024-04-27 04:57:51.443348] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.785 [2024-04-27 04:57:51.443430] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.785 [2024-04-27 04:57:51.443458] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name raid, state offline 00:15:21.785 [2024-04-27 04:57:51.472364] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.043 ************************************ 00:15:22.043 END TEST raid_function_test_concat 00:15:22.043 ************************************ 00:15:22.043 04:57:51 -- bdev/bdev_raid.sh@113 -- # return 0 00:15:22.043 00:15:22.043 real 0m3.660s 00:15:22.043 user 0m4.932s 00:15:22.043 sys 0m1.063s 00:15:22.043 04:57:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.043 04:57:51 -- common/autotest_common.sh@10 -- # set +x 00:15:22.043 04:57:51 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:15:22.043 04:57:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:22.043 04:57:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:22.043 04:57:51 -- common/autotest_common.sh@10 -- # set +x 00:15:22.043 ************************************ 00:15:22.043 START TEST raid0_resize_test 00:15:22.043 ************************************ 00:15:22.043 04:57:51 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:15:22.043 04:57:51 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:15:22.043 04:57:51 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:15:22.043 04:57:51 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:15:22.043 04:57:51 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:15:22.043 04:57:51 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:15:22.043 04:57:51 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:15:22.043 04:57:51 -- bdev/bdev_raid.sh@301 -- # raid_pid=124243 00:15:22.043 Process raid pid: 124243 00:15:22.043 04:57:51 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 124243' 00:15:22.043 04:57:51 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:22.043 04:57:51 -- bdev/bdev_raid.sh@303 -- # waitforlisten 124243 /var/tmp/spdk-raid.sock 00:15:22.043 04:57:51 -- common/autotest_common.sh@819 -- # '[' -z 124243 ']' 00:15:22.043 04:57:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:22.043 04:57:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:22.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:22.043 04:57:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:22.043 04:57:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:22.043 04:57:51 -- common/autotest_common.sh@10 -- # set +x 00:15:22.043 [2024-04-27 04:57:51.932020] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:15:22.043 [2024-04-27 04:57:51.932318] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.301 [2024-04-27 04:57:52.110342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.559 [2024-04-27 04:57:52.213482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.559 [2024-04-27 04:57:52.293971] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.124 04:57:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:23.124 04:57:52 -- common/autotest_common.sh@852 -- # return 0 00:15:23.124 04:57:52 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:15:23.381 Base_1 00:15:23.381 04:57:53 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:15:23.638 Base_2 00:15:23.638 04:57:53 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:15:23.895 [2024-04-27 04:57:53.572281] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:15:23.895 [2024-04-27 04:57:53.574682] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:15:23.895 [2024-04-27 04:57:53.574782] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:15:23.895 [2024-04-27 04:57:53.574794] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:23.895 [2024-04-27 04:57:53.575039] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005450 00:15:23.895 [2024-04-27 04:57:53.575419] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:15:23.895 [2024-04-27 04:57:53.575433] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000006f80 00:15:23.895 [2024-04-27 04:57:53.575643] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.895 04:57:53 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:15:23.895 [2024-04-27 04:57:53.788396] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:15:23.895 [2024-04-27 04:57:53.788485] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:15:24.152 true 00:15:24.152 04:57:53 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:24.152 04:57:53 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:15:24.152 [2024-04-27 04:57:54.004478] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:24.152 04:57:54 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:15:24.152 04:57:54 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:15:24.152 04:57:54 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:15:24.152 04:57:54 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:15:24.410 [2024-04-27 04:57:54.252386] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
00:15:24.410 [2024-04-27 04:57:54.252481] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:15:24.410 [2024-04-27 04:57:54.252543] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:15:24.410 [2024-04-27 04:57:54.252630] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:24.410 true 00:15:24.410 04:57:54 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:24.410 04:57:54 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:15:24.668 [2024-04-27 04:57:54.476646] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:24.668 04:57:54 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:15:24.668 04:57:54 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:15:24.668 04:57:54 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:15:24.668 04:57:54 -- bdev/bdev_raid.sh@332 -- # killprocess 124243 00:15:24.668 04:57:54 -- common/autotest_common.sh@926 -- # '[' -z 124243 ']' 00:15:24.668 04:57:54 -- common/autotest_common.sh@930 -- # kill -0 124243 00:15:24.668 04:57:54 -- common/autotest_common.sh@931 -- # uname 00:15:24.668 04:57:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:24.668 04:57:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124243 00:15:24.668 04:57:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:24.668 killing process with pid 124243 00:15:24.668 04:57:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:24.668 04:57:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124243' 00:15:24.668 04:57:54 -- common/autotest_common.sh@945 -- # kill 124243 00:15:24.668 [2024-04-27 04:57:54.522337] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:24.668 04:57:54 -- common/autotest_common.sh@950 -- # wait 124243 00:15:24.668 [2024-04-27 04:57:54.522480] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.668 [2024-04-27 04:57:54.522558] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:24.668 [2024-04-27 04:57:54.522573] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Raid, state offline 00:15:24.668 [2024-04-27 04:57:54.523194] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@334 -- # return 0 00:15:25.236 00:15:25.236 real 0m2.998s 00:15:25.236 user 0m4.482s 00:15:25.236 sys 0m0.609s 00:15:25.236 ************************************ 00:15:25.236 END TEST raid0_resize_test 00:15:25.236 ************************************ 00:15:25.236 04:57:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:25.236 04:57:54 -- common/autotest_common.sh@10 -- # set +x 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:15:25.236 04:57:54 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:25.236 04:57:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:25.236 04:57:54 -- common/autotest_common.sh@10 -- # set +x 00:15:25.236 ************************************ 00:15:25.236 START TEST 
raid_state_function_test 00:15:25.236 ************************************ 00:15:25.236 04:57:54 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@226 -- # raid_pid=124319 00:15:25.236 Process raid pid: 124319 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124319' 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124319 /var/tmp/spdk-raid.sock 00:15:25.236 04:57:54 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:25.236 04:57:54 -- common/autotest_common.sh@819 -- # '[' -z 124319 ']' 00:15:25.236 04:57:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:25.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:25.236 04:57:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:25.236 04:57:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:25.236 04:57:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:25.236 04:57:54 -- common/autotest_common.sh@10 -- # set +x 00:15:25.236 [2024-04-27 04:57:54.989128] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
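The test wrapper has just launched a dedicated bdev_svc app on /var/tmp/spdk-raid.sock and is blocking in waitforlisten until that socket answers before it issues any raid RPCs. A simplified stand-alone version of the same startup, assuming the binary and socket path from the log (the rpc_get_methods polling loop is an illustrative stand-in for waitforlisten, not the harness's actual implementation):

  SOCK=/var/tmp/spdk-raid.sock
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$SOCK" -i 0 -L bdev_raid &
  raid_pid=$!
  # wait until the UNIX-domain socket accepts RPCs, then start creating bdevs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done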
00:15:25.236 [2024-04-27 04:57:54.989374] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.494 [2024-04-27 04:57:55.161216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.494 [2024-04-27 04:57:55.272754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.494 [2024-04-27 04:57:55.354113] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.061 04:57:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:26.061 04:57:55 -- common/autotest_common.sh@852 -- # return 0 00:15:26.061 04:57:55 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:26.320 [2024-04-27 04:57:56.157500] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:26.320 [2024-04-27 04:57:56.157621] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:26.320 [2024-04-27 04:57:56.157652] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:26.320 [2024-04-27 04:57:56.157673] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:26.320 04:57:56 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:26.320 04:57:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:26.320 04:57:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:26.320 04:57:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:26.320 04:57:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:26.320 04:57:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:26.320 04:57:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:26.320 04:57:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:26.320 04:57:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:26.320 04:57:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:26.320 04:57:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.320 04:57:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.579 04:57:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:26.579 "name": "Existed_Raid", 00:15:26.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.579 "strip_size_kb": 64, 00:15:26.579 "state": "configuring", 00:15:26.579 "raid_level": "raid0", 00:15:26.579 "superblock": false, 00:15:26.579 "num_base_bdevs": 2, 00:15:26.579 "num_base_bdevs_discovered": 0, 00:15:26.579 "num_base_bdevs_operational": 2, 00:15:26.579 "base_bdevs_list": [ 00:15:26.579 { 00:15:26.579 "name": "BaseBdev1", 00:15:26.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.579 "is_configured": false, 00:15:26.579 "data_offset": 0, 00:15:26.579 "data_size": 0 00:15:26.579 }, 00:15:26.579 { 00:15:26.579 "name": "BaseBdev2", 00:15:26.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.579 "is_configured": false, 00:15:26.579 "data_offset": 0, 00:15:26.579 "data_size": 0 00:15:26.579 } 00:15:26.579 ] 00:15:26.579 }' 00:15:26.579 04:57:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:26.579 04:57:56 -- 
common/autotest_common.sh@10 -- # set +x 00:15:27.516 04:57:57 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:27.516 [2024-04-27 04:57:57.345634] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:27.516 [2024-04-27 04:57:57.345726] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:27.516 04:57:57 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:27.775 [2024-04-27 04:57:57.649711] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:27.775 [2024-04-27 04:57:57.649854] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:27.775 [2024-04-27 04:57:57.649886] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:27.775 [2024-04-27 04:57:57.649915] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:27.775 04:57:57 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:28.035 [2024-04-27 04:57:57.884296] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:28.035 BaseBdev1 00:15:28.035 04:57:57 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:28.035 04:57:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:28.035 04:57:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:28.035 04:57:57 -- common/autotest_common.sh@889 -- # local i 00:15:28.035 04:57:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:28.035 04:57:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:28.035 04:57:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:28.295 04:57:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:28.554 [ 00:15:28.554 { 00:15:28.554 "name": "BaseBdev1", 00:15:28.554 "aliases": [ 00:15:28.554 "6e3ff624-861a-4cf7-875e-6cac19c614d6" 00:15:28.554 ], 00:15:28.554 "product_name": "Malloc disk", 00:15:28.554 "block_size": 512, 00:15:28.554 "num_blocks": 65536, 00:15:28.554 "uuid": "6e3ff624-861a-4cf7-875e-6cac19c614d6", 00:15:28.554 "assigned_rate_limits": { 00:15:28.554 "rw_ios_per_sec": 0, 00:15:28.554 "rw_mbytes_per_sec": 0, 00:15:28.554 "r_mbytes_per_sec": 0, 00:15:28.554 "w_mbytes_per_sec": 0 00:15:28.554 }, 00:15:28.554 "claimed": true, 00:15:28.554 "claim_type": "exclusive_write", 00:15:28.554 "zoned": false, 00:15:28.554 "supported_io_types": { 00:15:28.554 "read": true, 00:15:28.554 "write": true, 00:15:28.554 "unmap": true, 00:15:28.554 "write_zeroes": true, 00:15:28.554 "flush": true, 00:15:28.554 "reset": true, 00:15:28.554 "compare": false, 00:15:28.554 "compare_and_write": false, 00:15:28.554 "abort": true, 00:15:28.554 "nvme_admin": false, 00:15:28.554 "nvme_io": false 00:15:28.554 }, 00:15:28.554 "memory_domains": [ 00:15:28.554 { 00:15:28.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.554 "dma_device_type": 2 00:15:28.554 } 00:15:28.554 ], 00:15:28.554 "driver_specific": {} 00:15:28.554 } 00:15:28.554 ] 00:15:28.554 04:57:58 
-- common/autotest_common.sh@895 -- # return 0 00:15:28.554 04:57:58 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:28.554 04:57:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:28.554 04:57:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:28.554 04:57:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:28.555 04:57:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:28.555 04:57:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:28.555 04:57:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:28.555 04:57:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:28.555 04:57:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:28.555 04:57:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:28.555 04:57:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.555 04:57:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.814 04:57:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:28.814 "name": "Existed_Raid", 00:15:28.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.814 "strip_size_kb": 64, 00:15:28.814 "state": "configuring", 00:15:28.814 "raid_level": "raid0", 00:15:28.814 "superblock": false, 00:15:28.814 "num_base_bdevs": 2, 00:15:28.814 "num_base_bdevs_discovered": 1, 00:15:28.814 "num_base_bdevs_operational": 2, 00:15:28.814 "base_bdevs_list": [ 00:15:28.814 { 00:15:28.814 "name": "BaseBdev1", 00:15:28.814 "uuid": "6e3ff624-861a-4cf7-875e-6cac19c614d6", 00:15:28.814 "is_configured": true, 00:15:28.814 "data_offset": 0, 00:15:28.814 "data_size": 65536 00:15:28.814 }, 00:15:28.814 { 00:15:28.814 "name": "BaseBdev2", 00:15:28.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.814 "is_configured": false, 00:15:28.814 "data_offset": 0, 00:15:28.814 "data_size": 0 00:15:28.814 } 00:15:28.814 ] 00:15:28.814 }' 00:15:28.814 04:57:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:28.814 04:57:58 -- common/autotest_common.sh@10 -- # set +x 00:15:29.382 04:57:59 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:29.641 [2024-04-27 04:57:59.412851] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:29.641 [2024-04-27 04:57:59.412939] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:29.641 04:57:59 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:29.641 04:57:59 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:29.900 [2024-04-27 04:57:59.629014] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.900 [2024-04-27 04:57:59.631573] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.900 [2024-04-27 04:57:59.631661] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:29.900 04:57:59 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:29.900 04:57:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:29.900 04:57:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:29.900 04:57:59 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:29.900 04:57:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:29.900 04:57:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:29.900 04:57:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:29.900 04:57:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:29.900 04:57:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:29.900 04:57:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:29.900 04:57:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:29.900 04:57:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:29.900 04:57:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.900 04:57:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.160 04:57:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:30.160 "name": "Existed_Raid", 00:15:30.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.160 "strip_size_kb": 64, 00:15:30.160 "state": "configuring", 00:15:30.160 "raid_level": "raid0", 00:15:30.160 "superblock": false, 00:15:30.160 "num_base_bdevs": 2, 00:15:30.160 "num_base_bdevs_discovered": 1, 00:15:30.160 "num_base_bdevs_operational": 2, 00:15:30.160 "base_bdevs_list": [ 00:15:30.160 { 00:15:30.160 "name": "BaseBdev1", 00:15:30.160 "uuid": "6e3ff624-861a-4cf7-875e-6cac19c614d6", 00:15:30.160 "is_configured": true, 00:15:30.160 "data_offset": 0, 00:15:30.160 "data_size": 65536 00:15:30.160 }, 00:15:30.160 { 00:15:30.160 "name": "BaseBdev2", 00:15:30.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.160 "is_configured": false, 00:15:30.160 "data_offset": 0, 00:15:30.160 "data_size": 0 00:15:30.160 } 00:15:30.160 ] 00:15:30.160 }' 00:15:30.160 04:57:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:30.160 04:57:59 -- common/autotest_common.sh@10 -- # set +x 00:15:30.773 04:58:00 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:31.038 [2024-04-27 04:58:00.834721] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.038 [2024-04-27 04:58:00.834827] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:15:31.038 [2024-04-27 04:58:00.834845] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:31.038 [2024-04-27 04:58:00.835117] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:31.038 [2024-04-27 04:58:00.835847] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:15:31.038 [2024-04-27 04:58:00.835887] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:15:31.038 [2024-04-27 04:58:00.836370] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.038 BaseBdev2 00:15:31.038 04:58:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:31.038 04:58:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:31.038 04:58:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:31.038 04:58:00 -- common/autotest_common.sh@889 -- # local i 00:15:31.038 04:58:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:31.038 04:58:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:31.038 
04:58:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:31.297 04:58:01 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:31.556 [ 00:15:31.556 { 00:15:31.556 "name": "BaseBdev2", 00:15:31.556 "aliases": [ 00:15:31.556 "f286273d-1069-43fd-8954-1f57cca9a7f7" 00:15:31.556 ], 00:15:31.556 "product_name": "Malloc disk", 00:15:31.556 "block_size": 512, 00:15:31.556 "num_blocks": 65536, 00:15:31.556 "uuid": "f286273d-1069-43fd-8954-1f57cca9a7f7", 00:15:31.556 "assigned_rate_limits": { 00:15:31.556 "rw_ios_per_sec": 0, 00:15:31.556 "rw_mbytes_per_sec": 0, 00:15:31.556 "r_mbytes_per_sec": 0, 00:15:31.556 "w_mbytes_per_sec": 0 00:15:31.556 }, 00:15:31.556 "claimed": true, 00:15:31.556 "claim_type": "exclusive_write", 00:15:31.556 "zoned": false, 00:15:31.556 "supported_io_types": { 00:15:31.556 "read": true, 00:15:31.556 "write": true, 00:15:31.556 "unmap": true, 00:15:31.556 "write_zeroes": true, 00:15:31.556 "flush": true, 00:15:31.556 "reset": true, 00:15:31.556 "compare": false, 00:15:31.556 "compare_and_write": false, 00:15:31.556 "abort": true, 00:15:31.556 "nvme_admin": false, 00:15:31.556 "nvme_io": false 00:15:31.556 }, 00:15:31.556 "memory_domains": [ 00:15:31.556 { 00:15:31.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.556 "dma_device_type": 2 00:15:31.556 } 00:15:31.556 ], 00:15:31.556 "driver_specific": {} 00:15:31.556 } 00:15:31.556 ] 00:15:31.556 04:58:01 -- common/autotest_common.sh@895 -- # return 0 00:15:31.556 04:58:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:31.556 04:58:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:31.556 04:58:01 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:31.556 04:58:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:31.556 04:58:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:31.556 04:58:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:31.556 04:58:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:31.556 04:58:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:31.556 04:58:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:31.556 04:58:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:31.556 04:58:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:31.556 04:58:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:31.556 04:58:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.556 04:58:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.816 04:58:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:31.816 "name": "Existed_Raid", 00:15:31.816 "uuid": "1d679c5e-4483-4f70-a54e-09046ff42a2a", 00:15:31.816 "strip_size_kb": 64, 00:15:31.816 "state": "online", 00:15:31.816 "raid_level": "raid0", 00:15:31.816 "superblock": false, 00:15:31.816 "num_base_bdevs": 2, 00:15:31.816 "num_base_bdevs_discovered": 2, 00:15:31.816 "num_base_bdevs_operational": 2, 00:15:31.816 "base_bdevs_list": [ 00:15:31.816 { 00:15:31.816 "name": "BaseBdev1", 00:15:31.816 "uuid": "6e3ff624-861a-4cf7-875e-6cac19c614d6", 00:15:31.816 "is_configured": true, 00:15:31.816 "data_offset": 0, 00:15:31.816 "data_size": 65536 00:15:31.816 }, 00:15:31.816 { 00:15:31.816 "name": "BaseBdev2", 
00:15:31.816 "uuid": "f286273d-1069-43fd-8954-1f57cca9a7f7", 00:15:31.816 "is_configured": true, 00:15:31.816 "data_offset": 0, 00:15:31.816 "data_size": 65536 00:15:31.816 } 00:15:31.816 ] 00:15:31.816 }' 00:15:31.816 04:58:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:31.816 04:58:01 -- common/autotest_common.sh@10 -- # set +x 00:15:32.466 04:58:02 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:32.725 [2024-04-27 04:58:02.459300] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:32.725 [2024-04-27 04:58:02.459361] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:32.725 [2024-04-27 04:58:02.459477] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:32.725 04:58:02 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:32.725 04:58:02 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:32.725 04:58:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:32.725 04:58:02 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:32.725 04:58:02 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:32.725 04:58:02 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:32.725 04:58:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:32.725 04:58:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:32.725 04:58:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:32.725 04:58:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:32.725 04:58:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:32.725 04:58:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:32.725 04:58:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:32.725 04:58:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:32.725 04:58:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:32.725 04:58:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.725 04:58:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.984 04:58:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:32.984 "name": "Existed_Raid", 00:15:32.984 "uuid": "1d679c5e-4483-4f70-a54e-09046ff42a2a", 00:15:32.984 "strip_size_kb": 64, 00:15:32.984 "state": "offline", 00:15:32.984 "raid_level": "raid0", 00:15:32.984 "superblock": false, 00:15:32.984 "num_base_bdevs": 2, 00:15:32.984 "num_base_bdevs_discovered": 1, 00:15:32.984 "num_base_bdevs_operational": 1, 00:15:32.984 "base_bdevs_list": [ 00:15:32.984 { 00:15:32.984 "name": null, 00:15:32.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.984 "is_configured": false, 00:15:32.984 "data_offset": 0, 00:15:32.984 "data_size": 65536 00:15:32.984 }, 00:15:32.984 { 00:15:32.984 "name": "BaseBdev2", 00:15:32.984 "uuid": "f286273d-1069-43fd-8954-1f57cca9a7f7", 00:15:32.984 "is_configured": true, 00:15:32.984 "data_offset": 0, 00:15:32.984 "data_size": 65536 00:15:32.984 } 00:15:32.984 ] 00:15:32.984 }' 00:15:32.984 04:58:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:32.984 04:58:02 -- common/autotest_common.sh@10 -- # set +x 00:15:33.550 04:58:03 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:33.550 04:58:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:33.550 04:58:03 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.550 04:58:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:33.807 04:58:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:33.807 04:58:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:33.807 04:58:03 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:34.064 [2024-04-27 04:58:03.796028] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:34.064 [2024-04-27 04:58:03.796156] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:15:34.064 04:58:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:34.064 04:58:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:34.064 04:58:03 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:34.064 04:58:03 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.322 04:58:04 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:34.322 04:58:04 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:34.322 04:58:04 -- bdev/bdev_raid.sh@287 -- # killprocess 124319 00:15:34.322 04:58:04 -- common/autotest_common.sh@926 -- # '[' -z 124319 ']' 00:15:34.322 04:58:04 -- common/autotest_common.sh@930 -- # kill -0 124319 00:15:34.322 04:58:04 -- common/autotest_common.sh@931 -- # uname 00:15:34.322 04:58:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:34.322 04:58:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124319 00:15:34.322 04:58:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:34.322 killing process with pid 124319 00:15:34.322 04:58:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:34.322 04:58:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124319' 00:15:34.322 04:58:04 -- common/autotest_common.sh@945 -- # kill 124319 00:15:34.322 [2024-04-27 04:58:04.083409] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:34.322 04:58:04 -- common/autotest_common.sh@950 -- # wait 124319 00:15:34.322 [2024-04-27 04:58:04.083517] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:34.887 00:15:34.887 real 0m9.558s 00:15:34.887 user 0m17.101s 00:15:34.887 sys 0m1.329s 00:15:34.887 ************************************ 00:15:34.887 END TEST raid_state_function_test 00:15:34.887 ************************************ 00:15:34.887 04:58:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:34.887 04:58:04 -- common/autotest_common.sh@10 -- # set +x 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:15:34.887 04:58:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:34.887 04:58:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:34.887 04:58:04 -- common/autotest_common.sh@10 -- # set +x 00:15:34.887 ************************************ 00:15:34.887 START TEST raid_state_function_test_sb 00:15:34.887 ************************************ 00:15:34.887 04:58:04 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:34.887 04:58:04 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@226 -- # raid_pid=124641 00:15:34.887 Process raid pid: 124641 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124641' 00:15:34.887 04:58:04 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124641 /var/tmp/spdk-raid.sock 00:15:34.887 04:58:04 -- common/autotest_common.sh@819 -- # '[' -z 124641 ']' 00:15:34.887 04:58:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:34.887 04:58:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:34.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:34.887 04:58:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:34.887 04:58:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:34.887 04:58:04 -- common/autotest_common.sh@10 -- # set +x 00:15:34.887 [2024-04-27 04:58:04.612697] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
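This run repeats the previous state-function test with superblock=true, so the only difference on the wire is the extra -s handed to bdev_raid_create; with a superblock each 65536-block malloc base reserves 2048 blocks for metadata, which is why the dumps further down report data_offset 2048 and data_size 63488 instead of 0 and 65536. A minimal sketch of the two create calls side by side, names as in the trace:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # previous test, no superblock: data_offset stays 0, all 65536 blocks usable
  $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # this test, with superblock: identical apart from -s
  $RPC bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid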
00:15:34.887 [2024-04-27 04:58:04.612954] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.145 [2024-04-27 04:58:04.788293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.145 [2024-04-27 04:58:04.921162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.145 [2024-04-27 04:58:05.015720] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.078 04:58:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:36.079 04:58:05 -- common/autotest_common.sh@852 -- # return 0 00:15:36.079 04:58:05 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:36.079 [2024-04-27 04:58:05.846367] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:36.079 [2024-04-27 04:58:05.846497] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:36.079 [2024-04-27 04:58:05.846528] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:36.079 [2024-04-27 04:58:05.846550] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:36.079 04:58:05 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:36.079 04:58:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:36.079 04:58:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:36.079 04:58:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:36.079 04:58:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:36.079 04:58:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:36.079 04:58:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:36.079 04:58:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:36.079 04:58:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:36.079 04:58:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:36.079 04:58:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.079 04:58:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.337 04:58:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:36.337 "name": "Existed_Raid", 00:15:36.337 "uuid": "c747da43-81a1-4c24-a6d4-2368e3827b00", 00:15:36.337 "strip_size_kb": 64, 00:15:36.337 "state": "configuring", 00:15:36.337 "raid_level": "raid0", 00:15:36.337 "superblock": true, 00:15:36.337 "num_base_bdevs": 2, 00:15:36.337 "num_base_bdevs_discovered": 0, 00:15:36.337 "num_base_bdevs_operational": 2, 00:15:36.337 "base_bdevs_list": [ 00:15:36.337 { 00:15:36.337 "name": "BaseBdev1", 00:15:36.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.337 "is_configured": false, 00:15:36.337 "data_offset": 0, 00:15:36.337 "data_size": 0 00:15:36.337 }, 00:15:36.337 { 00:15:36.337 "name": "BaseBdev2", 00:15:36.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.337 "is_configured": false, 00:15:36.337 "data_offset": 0, 00:15:36.337 "data_size": 0 00:15:36.337 } 00:15:36.337 ] 00:15:36.337 }' 00:15:36.337 04:58:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:36.337 04:58:06 -- 
common/autotest_common.sh@10 -- # set +x 00:15:36.903 04:58:06 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:37.161 [2024-04-27 04:58:06.958440] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:37.161 [2024-04-27 04:58:06.958525] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:37.161 04:58:06 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:37.441 [2024-04-27 04:58:07.218589] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.441 [2024-04-27 04:58:07.218762] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.441 [2024-04-27 04:58:07.218793] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.441 [2024-04-27 04:58:07.218822] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.441 04:58:07 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:37.708 [2024-04-27 04:58:07.446095] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:37.708 BaseBdev1 00:15:37.708 04:58:07 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:37.708 04:58:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:37.708 04:58:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:37.708 04:58:07 -- common/autotest_common.sh@889 -- # local i 00:15:37.708 04:58:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:37.708 04:58:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:37.708 04:58:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:37.966 04:58:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:38.224 [ 00:15:38.224 { 00:15:38.224 "name": "BaseBdev1", 00:15:38.224 "aliases": [ 00:15:38.224 "872c3cd5-d2d1-4cbf-b09e-98134d91a32f" 00:15:38.224 ], 00:15:38.224 "product_name": "Malloc disk", 00:15:38.224 "block_size": 512, 00:15:38.224 "num_blocks": 65536, 00:15:38.224 "uuid": "872c3cd5-d2d1-4cbf-b09e-98134d91a32f", 00:15:38.224 "assigned_rate_limits": { 00:15:38.224 "rw_ios_per_sec": 0, 00:15:38.224 "rw_mbytes_per_sec": 0, 00:15:38.224 "r_mbytes_per_sec": 0, 00:15:38.224 "w_mbytes_per_sec": 0 00:15:38.224 }, 00:15:38.224 "claimed": true, 00:15:38.224 "claim_type": "exclusive_write", 00:15:38.224 "zoned": false, 00:15:38.224 "supported_io_types": { 00:15:38.224 "read": true, 00:15:38.224 "write": true, 00:15:38.224 "unmap": true, 00:15:38.224 "write_zeroes": true, 00:15:38.224 "flush": true, 00:15:38.224 "reset": true, 00:15:38.224 "compare": false, 00:15:38.224 "compare_and_write": false, 00:15:38.224 "abort": true, 00:15:38.224 "nvme_admin": false, 00:15:38.224 "nvme_io": false 00:15:38.224 }, 00:15:38.224 "memory_domains": [ 00:15:38.224 { 00:15:38.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.224 "dma_device_type": 2 00:15:38.224 } 00:15:38.224 ], 00:15:38.224 "driver_specific": {} 00:15:38.224 } 00:15:38.224 ] 00:15:38.224 
04:58:07 -- common/autotest_common.sh@895 -- # return 0 00:15:38.224 04:58:07 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:38.224 04:58:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:38.224 04:58:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:38.224 04:58:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:38.224 04:58:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:38.224 04:58:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:38.224 04:58:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:38.224 04:58:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:38.224 04:58:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:38.224 04:58:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:38.224 04:58:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.224 04:58:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.483 04:58:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:38.483 "name": "Existed_Raid", 00:15:38.483 "uuid": "0ee169e8-52c4-402e-975d-be855987e20d", 00:15:38.483 "strip_size_kb": 64, 00:15:38.483 "state": "configuring", 00:15:38.483 "raid_level": "raid0", 00:15:38.483 "superblock": true, 00:15:38.483 "num_base_bdevs": 2, 00:15:38.483 "num_base_bdevs_discovered": 1, 00:15:38.483 "num_base_bdevs_operational": 2, 00:15:38.483 "base_bdevs_list": [ 00:15:38.483 { 00:15:38.483 "name": "BaseBdev1", 00:15:38.483 "uuid": "872c3cd5-d2d1-4cbf-b09e-98134d91a32f", 00:15:38.483 "is_configured": true, 00:15:38.483 "data_offset": 2048, 00:15:38.483 "data_size": 63488 00:15:38.483 }, 00:15:38.483 { 00:15:38.483 "name": "BaseBdev2", 00:15:38.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.483 "is_configured": false, 00:15:38.483 "data_offset": 0, 00:15:38.483 "data_size": 0 00:15:38.483 } 00:15:38.483 ] 00:15:38.483 }' 00:15:38.483 04:58:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:38.483 04:58:08 -- common/autotest_common.sh@10 -- # set +x 00:15:39.049 04:58:08 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:39.307 [2024-04-27 04:58:09.082668] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:39.307 [2024-04-27 04:58:09.082785] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:39.307 04:58:09 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:39.307 04:58:09 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:39.565 04:58:09 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:39.823 BaseBdev1 00:15:39.823 04:58:09 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:39.823 04:58:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:39.823 04:58:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:39.823 04:58:09 -- common/autotest_common.sh@889 -- # local i 00:15:39.823 04:58:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:39.823 04:58:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:39.823 04:58:09 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:40.082 04:58:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:40.340 [ 00:15:40.340 { 00:15:40.340 "name": "BaseBdev1", 00:15:40.340 "aliases": [ 00:15:40.340 "94005cd8-a773-41e9-911c-6883a25e887d" 00:15:40.340 ], 00:15:40.340 "product_name": "Malloc disk", 00:15:40.340 "block_size": 512, 00:15:40.340 "num_blocks": 65536, 00:15:40.340 "uuid": "94005cd8-a773-41e9-911c-6883a25e887d", 00:15:40.340 "assigned_rate_limits": { 00:15:40.340 "rw_ios_per_sec": 0, 00:15:40.340 "rw_mbytes_per_sec": 0, 00:15:40.340 "r_mbytes_per_sec": 0, 00:15:40.340 "w_mbytes_per_sec": 0 00:15:40.340 }, 00:15:40.340 "claimed": false, 00:15:40.340 "zoned": false, 00:15:40.340 "supported_io_types": { 00:15:40.340 "read": true, 00:15:40.340 "write": true, 00:15:40.340 "unmap": true, 00:15:40.340 "write_zeroes": true, 00:15:40.340 "flush": true, 00:15:40.340 "reset": true, 00:15:40.340 "compare": false, 00:15:40.340 "compare_and_write": false, 00:15:40.340 "abort": true, 00:15:40.340 "nvme_admin": false, 00:15:40.340 "nvme_io": false 00:15:40.340 }, 00:15:40.340 "memory_domains": [ 00:15:40.340 { 00:15:40.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.340 "dma_device_type": 2 00:15:40.340 } 00:15:40.340 ], 00:15:40.340 "driver_specific": {} 00:15:40.340 } 00:15:40.340 ] 00:15:40.340 04:58:10 -- common/autotest_common.sh@895 -- # return 0 00:15:40.340 04:58:10 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:40.598 [2024-04-27 04:58:10.285904] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.598 [2024-04-27 04:58:10.288255] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:40.598 [2024-04-27 04:58:10.288342] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:40.598 04:58:10 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:40.598 04:58:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:40.598 04:58:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:40.598 04:58:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:40.598 04:58:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:40.598 04:58:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:40.598 04:58:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:40.598 04:58:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:40.598 04:58:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:40.598 04:58:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:40.598 04:58:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:40.598 04:58:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:40.598 04:58:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.598 04:58:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.885 04:58:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:40.885 "name": "Existed_Raid", 00:15:40.885 "uuid": "964626d2-7f68-46be-ac62-f4c2b9c5e48c", 00:15:40.885 "strip_size_kb": 64, 00:15:40.885 "state": 
"configuring", 00:15:40.885 "raid_level": "raid0", 00:15:40.885 "superblock": true, 00:15:40.885 "num_base_bdevs": 2, 00:15:40.885 "num_base_bdevs_discovered": 1, 00:15:40.885 "num_base_bdevs_operational": 2, 00:15:40.885 "base_bdevs_list": [ 00:15:40.885 { 00:15:40.885 "name": "BaseBdev1", 00:15:40.885 "uuid": "94005cd8-a773-41e9-911c-6883a25e887d", 00:15:40.885 "is_configured": true, 00:15:40.885 "data_offset": 2048, 00:15:40.885 "data_size": 63488 00:15:40.885 }, 00:15:40.885 { 00:15:40.885 "name": "BaseBdev2", 00:15:40.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.885 "is_configured": false, 00:15:40.885 "data_offset": 0, 00:15:40.885 "data_size": 0 00:15:40.885 } 00:15:40.885 ] 00:15:40.885 }' 00:15:40.885 04:58:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:40.885 04:58:10 -- common/autotest_common.sh@10 -- # set +x 00:15:41.450 04:58:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:41.707 [2024-04-27 04:58:11.470324] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.707 [2024-04-27 04:58:11.470654] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:15:41.707 [2024-04-27 04:58:11.470677] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:41.707 [2024-04-27 04:58:11.470893] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:41.707 [2024-04-27 04:58:11.471456] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:15:41.708 [2024-04-27 04:58:11.471484] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:15:41.708 [2024-04-27 04:58:11.471705] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.708 BaseBdev2 00:15:41.708 04:58:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:41.708 04:58:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:41.708 04:58:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:41.708 04:58:11 -- common/autotest_common.sh@889 -- # local i 00:15:41.708 04:58:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:41.708 04:58:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:41.708 04:58:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:41.965 04:58:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:42.223 [ 00:15:42.223 { 00:15:42.223 "name": "BaseBdev2", 00:15:42.223 "aliases": [ 00:15:42.223 "adc62e8b-bb6c-4e4d-9e9c-6fae2f1307bd" 00:15:42.223 ], 00:15:42.223 "product_name": "Malloc disk", 00:15:42.223 "block_size": 512, 00:15:42.223 "num_blocks": 65536, 00:15:42.223 "uuid": "adc62e8b-bb6c-4e4d-9e9c-6fae2f1307bd", 00:15:42.223 "assigned_rate_limits": { 00:15:42.223 "rw_ios_per_sec": 0, 00:15:42.223 "rw_mbytes_per_sec": 0, 00:15:42.223 "r_mbytes_per_sec": 0, 00:15:42.223 "w_mbytes_per_sec": 0 00:15:42.223 }, 00:15:42.223 "claimed": true, 00:15:42.223 "claim_type": "exclusive_write", 00:15:42.223 "zoned": false, 00:15:42.223 "supported_io_types": { 00:15:42.223 "read": true, 00:15:42.223 "write": true, 00:15:42.223 "unmap": true, 00:15:42.223 "write_zeroes": true, 00:15:42.223 "flush": true, 00:15:42.223 
"reset": true, 00:15:42.223 "compare": false, 00:15:42.223 "compare_and_write": false, 00:15:42.223 "abort": true, 00:15:42.223 "nvme_admin": false, 00:15:42.223 "nvme_io": false 00:15:42.223 }, 00:15:42.223 "memory_domains": [ 00:15:42.223 { 00:15:42.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.224 "dma_device_type": 2 00:15:42.224 } 00:15:42.224 ], 00:15:42.224 "driver_specific": {} 00:15:42.224 } 00:15:42.224 ] 00:15:42.224 04:58:11 -- common/autotest_common.sh@895 -- # return 0 00:15:42.224 04:58:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:42.224 04:58:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:42.224 04:58:11 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:42.224 04:58:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:42.224 04:58:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:42.224 04:58:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:42.224 04:58:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:42.224 04:58:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:42.224 04:58:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:42.224 04:58:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:42.224 04:58:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:42.224 04:58:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:42.224 04:58:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.224 04:58:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.484 04:58:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:42.484 "name": "Existed_Raid", 00:15:42.484 "uuid": "964626d2-7f68-46be-ac62-f4c2b9c5e48c", 00:15:42.484 "strip_size_kb": 64, 00:15:42.485 "state": "online", 00:15:42.485 "raid_level": "raid0", 00:15:42.485 "superblock": true, 00:15:42.485 "num_base_bdevs": 2, 00:15:42.485 "num_base_bdevs_discovered": 2, 00:15:42.485 "num_base_bdevs_operational": 2, 00:15:42.485 "base_bdevs_list": [ 00:15:42.485 { 00:15:42.485 "name": "BaseBdev1", 00:15:42.485 "uuid": "94005cd8-a773-41e9-911c-6883a25e887d", 00:15:42.485 "is_configured": true, 00:15:42.485 "data_offset": 2048, 00:15:42.485 "data_size": 63488 00:15:42.485 }, 00:15:42.485 { 00:15:42.485 "name": "BaseBdev2", 00:15:42.485 "uuid": "adc62e8b-bb6c-4e4d-9e9c-6fae2f1307bd", 00:15:42.485 "is_configured": true, 00:15:42.485 "data_offset": 2048, 00:15:42.485 "data_size": 63488 00:15:42.485 } 00:15:42.485 ] 00:15:42.485 }' 00:15:42.485 04:58:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:42.485 04:58:12 -- common/autotest_common.sh@10 -- # set +x 00:15:43.080 04:58:12 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:43.337 [2024-04-27 04:58:13.094969] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:43.337 [2024-04-27 04:58:13.095026] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.337 [2024-04-27 04:58:13.095136] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.337 04:58:13 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:43.337 04:58:13 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:43.337 04:58:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:43.337 04:58:13 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:43.337 
04:58:13 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:43.337 04:58:13 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:43.337 04:58:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:43.337 04:58:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:43.337 04:58:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:43.337 04:58:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:43.337 04:58:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:43.337 04:58:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:43.337 04:58:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:43.337 04:58:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:43.337 04:58:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:43.337 04:58:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.337 04:58:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.595 04:58:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:43.595 "name": "Existed_Raid", 00:15:43.595 "uuid": "964626d2-7f68-46be-ac62-f4c2b9c5e48c", 00:15:43.595 "strip_size_kb": 64, 00:15:43.595 "state": "offline", 00:15:43.595 "raid_level": "raid0", 00:15:43.595 "superblock": true, 00:15:43.595 "num_base_bdevs": 2, 00:15:43.595 "num_base_bdevs_discovered": 1, 00:15:43.595 "num_base_bdevs_operational": 1, 00:15:43.595 "base_bdevs_list": [ 00:15:43.595 { 00:15:43.595 "name": null, 00:15:43.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.595 "is_configured": false, 00:15:43.595 "data_offset": 2048, 00:15:43.595 "data_size": 63488 00:15:43.595 }, 00:15:43.595 { 00:15:43.595 "name": "BaseBdev2", 00:15:43.595 "uuid": "adc62e8b-bb6c-4e4d-9e9c-6fae2f1307bd", 00:15:43.595 "is_configured": true, 00:15:43.595 "data_offset": 2048, 00:15:43.595 "data_size": 63488 00:15:43.595 } 00:15:43.595 ] 00:15:43.595 }' 00:15:43.595 04:58:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:43.595 04:58:13 -- common/autotest_common.sh@10 -- # set +x 00:15:44.527 04:58:14 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:44.527 04:58:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:44.527 04:58:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.527 04:58:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:44.527 04:58:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:44.527 04:58:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:44.527 04:58:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:44.784 [2024-04-27 04:58:14.606107] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:44.784 [2024-04-27 04:58:14.606241] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:15:44.785 04:58:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:44.785 04:58:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:44.785 04:58:14 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.785 04:58:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:45.043 04:58:14 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:15:45.043 04:58:14 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:45.043 04:58:14 -- bdev/bdev_raid.sh@287 -- # killprocess 124641 00:15:45.043 04:58:14 -- common/autotest_common.sh@926 -- # '[' -z 124641 ']' 00:15:45.043 04:58:14 -- common/autotest_common.sh@930 -- # kill -0 124641 00:15:45.043 04:58:14 -- common/autotest_common.sh@931 -- # uname 00:15:45.043 04:58:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:45.043 04:58:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124641 00:15:45.043 killing process with pid 124641 00:15:45.043 04:58:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:45.043 04:58:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:45.043 04:58:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124641' 00:15:45.043 04:58:14 -- common/autotest_common.sh@945 -- # kill 124641 00:15:45.043 04:58:14 -- common/autotest_common.sh@950 -- # wait 124641 00:15:45.043 [2024-04-27 04:58:14.915502] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:45.043 [2024-04-27 04:58:14.915620] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:45.609 ************************************ 00:15:45.609 END TEST raid_state_function_test_sb 00:15:45.609 ************************************ 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:45.609 00:15:45.609 real 0m10.789s 00:15:45.609 user 0m19.245s 00:15:45.609 sys 0m1.626s 00:15:45.609 04:58:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.609 04:58:15 -- common/autotest_common.sh@10 -- # set +x 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:15:45.609 04:58:15 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:45.609 04:58:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:45.609 04:58:15 -- common/autotest_common.sh@10 -- # set +x 00:15:45.609 ************************************ 00:15:45.609 START TEST raid_superblock_test 00:15:45.609 ************************************ 00:15:45.609 04:58:15 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@357 -- # raid_pid=124965 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@356 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:45.609 04:58:15 -- bdev/bdev_raid.sh@358 -- # waitforlisten 124965 /var/tmp/spdk-raid.sock 00:15:45.609 04:58:15 -- common/autotest_common.sh@819 -- # '[' -z 124965 ']' 00:15:45.609 04:58:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:45.609 04:58:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:45.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:45.609 04:58:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:45.609 04:58:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:45.609 04:58:15 -- common/autotest_common.sh@10 -- # set +x 00:15:45.609 [2024-04-27 04:58:15.447926] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:45.610 [2024-04-27 04:58:15.448182] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124965 ] 00:15:45.867 [2024-04-27 04:58:15.607020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.867 [2024-04-27 04:58:15.728098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.125 [2024-04-27 04:58:15.812485] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:46.691 04:58:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:46.691 04:58:16 -- common/autotest_common.sh@852 -- # return 0 00:15:46.691 04:58:16 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:46.691 04:58:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:46.691 04:58:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:46.691 04:58:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:46.691 04:58:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:46.691 04:58:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:46.691 04:58:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:46.691 04:58:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:46.691 04:58:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:46.949 malloc1 00:15:46.949 04:58:16 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:47.207 [2024-04-27 04:58:16.874312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:47.207 [2024-04-27 04:58:16.874455] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.207 [2024-04-27 04:58:16.874497] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:15:47.207 [2024-04-27 04:58:16.874566] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.207 [2024-04-27 04:58:16.877622] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.207 [2024-04-27 04:58:16.877677] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:47.207 pt1 00:15:47.207 04:58:16 -- bdev/bdev_raid.sh@361 
-- # (( i++ )) 00:15:47.207 04:58:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:47.207 04:58:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:47.207 04:58:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:47.207 04:58:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:47.207 04:58:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:47.207 04:58:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:47.207 04:58:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:47.207 04:58:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:47.207 malloc2 00:15:47.466 04:58:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:47.466 [2024-04-27 04:58:17.349909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:47.466 [2024-04-27 04:58:17.350055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.466 [2024-04-27 04:58:17.350109] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:47.466 [2024-04-27 04:58:17.350198] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.466 [2024-04-27 04:58:17.353325] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.466 [2024-04-27 04:58:17.353395] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:47.466 pt2 00:15:47.724 04:58:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:47.724 04:58:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:47.724 04:58:17 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:47.724 [2024-04-27 04:58:17.594298] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:47.724 [2024-04-27 04:58:17.597001] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:47.724 [2024-04-27 04:58:17.597269] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:15:47.724 [2024-04-27 04:58:17.597287] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:47.724 [2024-04-27 04:58:17.597470] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:47.724 [2024-04-27 04:58:17.597980] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:15:47.724 [2024-04-27 04:58:17.598007] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:15:47.724 [2024-04-27 04:58:17.598266] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.724 04:58:17 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:47.724 04:58:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:47.724 04:58:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:47.724 04:58:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:47.724 04:58:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:47.724 04:58:17 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:15:47.724 04:58:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:47.724 04:58:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:47.724 04:58:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:47.724 04:58:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:47.724 04:58:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.724 04:58:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.983 04:58:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:47.983 "name": "raid_bdev1", 00:15:47.983 "uuid": "9d569460-12c0-4877-83ad-ecacc0f69e56", 00:15:47.983 "strip_size_kb": 64, 00:15:47.983 "state": "online", 00:15:47.983 "raid_level": "raid0", 00:15:47.983 "superblock": true, 00:15:47.983 "num_base_bdevs": 2, 00:15:47.983 "num_base_bdevs_discovered": 2, 00:15:47.983 "num_base_bdevs_operational": 2, 00:15:47.983 "base_bdevs_list": [ 00:15:47.983 { 00:15:47.983 "name": "pt1", 00:15:47.983 "uuid": "4d47e606-4919-5662-8724-0502699f6a04", 00:15:47.983 "is_configured": true, 00:15:47.983 "data_offset": 2048, 00:15:47.983 "data_size": 63488 00:15:47.983 }, 00:15:47.983 { 00:15:47.983 "name": "pt2", 00:15:47.983 "uuid": "5ae3f168-4398-593f-86dd-810d20b759a4", 00:15:47.983 "is_configured": true, 00:15:47.983 "data_offset": 2048, 00:15:47.983 "data_size": 63488 00:15:47.983 } 00:15:47.983 ] 00:15:47.983 }' 00:15:47.983 04:58:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:47.983 04:58:17 -- common/autotest_common.sh@10 -- # set +x 00:15:48.918 04:58:18 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:48.918 04:58:18 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:48.918 [2024-04-27 04:58:18.734835] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.918 04:58:18 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=9d569460-12c0-4877-83ad-ecacc0f69e56 00:15:48.918 04:58:18 -- bdev/bdev_raid.sh@380 -- # '[' -z 9d569460-12c0-4877-83ad-ecacc0f69e56 ']' 00:15:48.918 04:58:18 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:49.177 [2024-04-27 04:58:18.962630] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.177 [2024-04-27 04:58:18.962672] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:49.177 [2024-04-27 04:58:18.962814] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.177 [2024-04-27 04:58:18.962884] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.177 [2024-04-27 04:58:18.962898] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:15:49.177 04:58:18 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.177 04:58:18 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:49.435 04:58:19 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:49.435 04:58:19 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:49.435 04:58:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:49.435 04:58:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:15:49.721 04:58:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:49.721 04:58:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:49.981 04:58:19 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:49.981 04:58:19 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:50.240 04:58:19 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:50.240 04:58:19 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:50.240 04:58:19 -- common/autotest_common.sh@640 -- # local es=0 00:15:50.240 04:58:19 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:50.240 04:58:19 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:50.240 04:58:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:50.240 04:58:19 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:50.240 04:58:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:50.240 04:58:19 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:50.240 04:58:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:50.240 04:58:19 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:50.240 04:58:19 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:50.240 04:58:19 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:50.240 [2024-04-27 04:58:20.118919] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:50.240 [2024-04-27 04:58:20.121562] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:50.240 [2024-04-27 04:58:20.121647] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:50.240 [2024-04-27 04:58:20.121761] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:50.240 [2024-04-27 04:58:20.121838] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.240 [2024-04-27 04:58:20.121851] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:15:50.240 request: 00:15:50.240 { 00:15:50.240 "name": "raid_bdev1", 00:15:50.240 "raid_level": "raid0", 00:15:50.240 "base_bdevs": [ 00:15:50.240 "malloc1", 00:15:50.240 "malloc2" 00:15:50.240 ], 00:15:50.240 "superblock": false, 00:15:50.240 "strip_size_kb": 64, 00:15:50.240 "method": "bdev_raid_create", 00:15:50.240 "req_id": 1 00:15:50.240 } 00:15:50.240 Got JSON-RPC error response 00:15:50.240 response: 00:15:50.240 { 00:15:50.240 "code": -17, 00:15:50.240 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:50.240 } 00:15:50.499 04:58:20 -- common/autotest_common.sh@643 -- # es=1 00:15:50.499 04:58:20 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:50.499 04:58:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:50.499 04:58:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:50.499 04:58:20 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.499 04:58:20 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:50.499 04:58:20 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:50.499 04:58:20 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:50.499 04:58:20 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:50.758 [2024-04-27 04:58:20.595008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:50.758 [2024-04-27 04:58:20.595168] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.758 [2024-04-27 04:58:20.595217] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:50.758 [2024-04-27 04:58:20.595250] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.758 [2024-04-27 04:58:20.598133] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.758 [2024-04-27 04:58:20.598198] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:50.758 [2024-04-27 04:58:20.598304] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:50.758 [2024-04-27 04:58:20.598407] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:50.758 pt1 00:15:50.758 04:58:20 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:15:50.758 04:58:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:50.758 04:58:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:50.758 04:58:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:50.758 04:58:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:50.758 04:58:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:50.758 04:58:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:50.758 04:58:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:50.758 04:58:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:50.758 04:58:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:50.758 04:58:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.758 04:58:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.016 04:58:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:51.016 "name": "raid_bdev1", 00:15:51.016 "uuid": "9d569460-12c0-4877-83ad-ecacc0f69e56", 00:15:51.017 "strip_size_kb": 64, 00:15:51.017 "state": "configuring", 00:15:51.017 "raid_level": "raid0", 00:15:51.017 "superblock": true, 00:15:51.017 "num_base_bdevs": 2, 00:15:51.017 "num_base_bdevs_discovered": 1, 00:15:51.017 "num_base_bdevs_operational": 2, 00:15:51.017 "base_bdevs_list": [ 00:15:51.017 { 00:15:51.017 "name": "pt1", 00:15:51.017 "uuid": "4d47e606-4919-5662-8724-0502699f6a04", 00:15:51.017 "is_configured": true, 00:15:51.017 "data_offset": 2048, 00:15:51.017 "data_size": 63488 00:15:51.017 }, 00:15:51.017 { 00:15:51.017 "name": null, 00:15:51.017 "uuid": 
"5ae3f168-4398-593f-86dd-810d20b759a4", 00:15:51.017 "is_configured": false, 00:15:51.017 "data_offset": 2048, 00:15:51.017 "data_size": 63488 00:15:51.017 } 00:15:51.017 ] 00:15:51.017 }' 00:15:51.017 04:58:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:51.017 04:58:20 -- common/autotest_common.sh@10 -- # set +x 00:15:51.952 04:58:21 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:51.952 04:58:21 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:51.952 04:58:21 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:51.952 04:58:21 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:51.952 [2024-04-27 04:58:21.719336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:51.952 [2024-04-27 04:58:21.719559] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.952 [2024-04-27 04:58:21.719607] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:51.952 [2024-04-27 04:58:21.719638] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.952 [2024-04-27 04:58:21.720209] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.952 [2024-04-27 04:58:21.720270] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:51.952 [2024-04-27 04:58:21.720372] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:51.952 [2024-04-27 04:58:21.720400] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:51.952 [2024-04-27 04:58:21.720543] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:15:51.952 [2024-04-27 04:58:21.720585] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:51.953 [2024-04-27 04:58:21.720702] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:51.953 [2024-04-27 04:58:21.721074] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:15:51.953 [2024-04-27 04:58:21.721099] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:15:51.953 [2024-04-27 04:58:21.721218] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.953 pt2 00:15:51.953 04:58:21 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:51.953 04:58:21 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:51.953 04:58:21 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:51.953 04:58:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:51.953 04:58:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:51.953 04:58:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:51.953 04:58:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:51.953 04:58:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:51.953 04:58:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:51.953 04:58:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:51.953 04:58:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:51.953 04:58:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:51.953 04:58:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.953 04:58:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.212 04:58:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:52.212 "name": "raid_bdev1", 00:15:52.212 "uuid": "9d569460-12c0-4877-83ad-ecacc0f69e56", 00:15:52.212 "strip_size_kb": 64, 00:15:52.212 "state": "online", 00:15:52.212 "raid_level": "raid0", 00:15:52.212 "superblock": true, 00:15:52.212 "num_base_bdevs": 2, 00:15:52.212 "num_base_bdevs_discovered": 2, 00:15:52.212 "num_base_bdevs_operational": 2, 00:15:52.212 "base_bdevs_list": [ 00:15:52.212 { 00:15:52.212 "name": "pt1", 00:15:52.212 "uuid": "4d47e606-4919-5662-8724-0502699f6a04", 00:15:52.212 "is_configured": true, 00:15:52.212 "data_offset": 2048, 00:15:52.212 "data_size": 63488 00:15:52.212 }, 00:15:52.212 { 00:15:52.212 "name": "pt2", 00:15:52.212 "uuid": "5ae3f168-4398-593f-86dd-810d20b759a4", 00:15:52.212 "is_configured": true, 00:15:52.212 "data_offset": 2048, 00:15:52.212 "data_size": 63488 00:15:52.212 } 00:15:52.212 ] 00:15:52.212 }' 00:15:52.212 04:58:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:52.212 04:58:21 -- common/autotest_common.sh@10 -- # set +x 00:15:52.779 04:58:22 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:52.779 04:58:22 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:53.038 [2024-04-27 04:58:22.839845] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.038 04:58:22 -- bdev/bdev_raid.sh@430 -- # '[' 9d569460-12c0-4877-83ad-ecacc0f69e56 '!=' 9d569460-12c0-4877-83ad-ecacc0f69e56 ']' 00:15:53.038 04:58:22 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:15:53.038 04:58:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:53.038 04:58:22 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:53.038 04:58:22 -- bdev/bdev_raid.sh@511 -- # killprocess 124965 00:15:53.038 04:58:22 -- common/autotest_common.sh@926 -- # '[' -z 124965 ']' 00:15:53.038 04:58:22 -- common/autotest_common.sh@930 -- # kill -0 124965 00:15:53.038 04:58:22 -- common/autotest_common.sh@931 -- # uname 00:15:53.038 04:58:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:53.038 04:58:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124965 00:15:53.038 04:58:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:53.038 04:58:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:53.038 04:58:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124965' 00:15:53.038 killing process with pid 124965 00:15:53.038 04:58:22 -- common/autotest_common.sh@945 -- # kill 124965 00:15:53.038 04:58:22 -- common/autotest_common.sh@950 -- # wait 124965 00:15:53.038 [2024-04-27 04:58:22.891057] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:53.038 [2024-04-27 04:58:22.891184] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.038 [2024-04-27 04:58:22.891256] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.038 [2024-04-27 04:58:22.891277] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:15:53.038 [2024-04-27 04:58:22.925983] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:53.606 00:15:53.606 real 0m7.913s 
00:15:53.606 user 0m14.034s 00:15:53.606 sys 0m1.156s 00:15:53.606 04:58:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.606 04:58:23 -- common/autotest_common.sh@10 -- # set +x 00:15:53.606 ************************************ 00:15:53.606 END TEST raid_superblock_test 00:15:53.606 ************************************ 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:15:53.606 04:58:23 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:53.606 04:58:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:53.606 04:58:23 -- common/autotest_common.sh@10 -- # set +x 00:15:53.606 ************************************ 00:15:53.606 START TEST raid_state_function_test 00:15:53.606 ************************************ 00:15:53.606 04:58:23 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@226 -- # raid_pid=125210 00:15:53.606 Process raid pid: 125210 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125210' 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:53.606 04:58:23 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125210 /var/tmp/spdk-raid.sock 00:15:53.606 04:58:23 -- common/autotest_common.sh@819 -- # '[' -z 125210 ']' 00:15:53.606 04:58:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:53.606 04:58:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:53.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
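The concat state-function test launching here drives everything through rpc.py against the /var/tmp/spdk-raid.sock socket served by bdev_svc. A condensed, hand-runnable sketch of the sequence it exercises is below — commands, sizes, and names are taken from the trace that follows, the ordering is simplified, and it assumes bdev_svc is already listening on that socket:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Creating the array before its members exist leaves it in the "configuring" state
  $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # Back it with two 32 MiB malloc bdevs (512-byte blocks); once both are claimed the state becomes "online"
  $RPC bdev_malloc_create 32 512 -b BaseBdev1
  $RPC bdev_malloc_create 32 512 -b BaseBdev2
  # Inspect the array the same way verify_raid_bdev_state does
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
  # Removing a base bdev while online drops the array to "offline", since concat has no redundancy
  $RPC bdev_malloc_delete BaseBdev1
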
00:15:53.606 04:58:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:53.606 04:58:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:53.606 04:58:23 -- common/autotest_common.sh@10 -- # set +x 00:15:53.606 [2024-04-27 04:58:23.430108] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:53.606 [2024-04-27 04:58:23.431035] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.866 [2024-04-27 04:58:23.601728] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.866 [2024-04-27 04:58:23.705900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.124 [2024-04-27 04:58:23.791033] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.691 04:58:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:54.691 04:58:24 -- common/autotest_common.sh@852 -- # return 0 00:15:54.691 04:58:24 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:54.948 [2024-04-27 04:58:24.618039] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:54.948 [2024-04-27 04:58:24.618163] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:54.948 [2024-04-27 04:58:24.618194] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.948 [2024-04-27 04:58:24.618216] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.948 04:58:24 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:54.948 04:58:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:54.948 04:58:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:54.948 04:58:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:54.948 04:58:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:54.949 04:58:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:54.949 04:58:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:54.949 04:58:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:54.949 04:58:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:54.949 04:58:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:54.949 04:58:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.949 04:58:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.206 04:58:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:55.206 "name": "Existed_Raid", 00:15:55.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.206 "strip_size_kb": 64, 00:15:55.206 "state": "configuring", 00:15:55.206 "raid_level": "concat", 00:15:55.206 "superblock": false, 00:15:55.206 "num_base_bdevs": 2, 00:15:55.206 "num_base_bdevs_discovered": 0, 00:15:55.206 "num_base_bdevs_operational": 2, 00:15:55.206 "base_bdevs_list": [ 00:15:55.206 { 00:15:55.206 "name": "BaseBdev1", 00:15:55.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.206 "is_configured": false, 
00:15:55.206 "data_offset": 0, 00:15:55.206 "data_size": 0 00:15:55.206 }, 00:15:55.206 { 00:15:55.206 "name": "BaseBdev2", 00:15:55.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.206 "is_configured": false, 00:15:55.206 "data_offset": 0, 00:15:55.206 "data_size": 0 00:15:55.206 } 00:15:55.206 ] 00:15:55.206 }' 00:15:55.206 04:58:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:55.206 04:58:24 -- common/autotest_common.sh@10 -- # set +x 00:15:55.772 04:58:25 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:56.030 [2024-04-27 04:58:25.802182] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:56.030 [2024-04-27 04:58:25.802250] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:56.030 04:58:25 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:56.287 [2024-04-27 04:58:26.078427] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:56.287 [2024-04-27 04:58:26.078594] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:56.287 [2024-04-27 04:58:26.078621] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:56.287 [2024-04-27 04:58:26.078670] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:56.287 04:58:26 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:56.545 [2024-04-27 04:58:26.347374] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:56.545 BaseBdev1 00:15:56.545 04:58:26 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:56.545 04:58:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:56.545 04:58:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:56.545 04:58:26 -- common/autotest_common.sh@889 -- # local i 00:15:56.545 04:58:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:56.545 04:58:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:56.545 04:58:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:56.803 04:58:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:57.061 [ 00:15:57.061 { 00:15:57.061 "name": "BaseBdev1", 00:15:57.061 "aliases": [ 00:15:57.061 "1ad6c5a8-d544-4450-87b4-53a32f34e5c9" 00:15:57.061 ], 00:15:57.061 "product_name": "Malloc disk", 00:15:57.061 "block_size": 512, 00:15:57.061 "num_blocks": 65536, 00:15:57.061 "uuid": "1ad6c5a8-d544-4450-87b4-53a32f34e5c9", 00:15:57.061 "assigned_rate_limits": { 00:15:57.061 "rw_ios_per_sec": 0, 00:15:57.061 "rw_mbytes_per_sec": 0, 00:15:57.061 "r_mbytes_per_sec": 0, 00:15:57.061 "w_mbytes_per_sec": 0 00:15:57.061 }, 00:15:57.061 "claimed": true, 00:15:57.061 "claim_type": "exclusive_write", 00:15:57.061 "zoned": false, 00:15:57.061 "supported_io_types": { 00:15:57.061 "read": true, 00:15:57.061 "write": true, 00:15:57.061 "unmap": true, 00:15:57.061 "write_zeroes": true, 00:15:57.061 "flush": true, 00:15:57.061 "reset": true, 00:15:57.061 
"compare": false, 00:15:57.061 "compare_and_write": false, 00:15:57.061 "abort": true, 00:15:57.061 "nvme_admin": false, 00:15:57.061 "nvme_io": false 00:15:57.061 }, 00:15:57.061 "memory_domains": [ 00:15:57.061 { 00:15:57.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.061 "dma_device_type": 2 00:15:57.061 } 00:15:57.061 ], 00:15:57.061 "driver_specific": {} 00:15:57.061 } 00:15:57.061 ] 00:15:57.061 04:58:26 -- common/autotest_common.sh@895 -- # return 0 00:15:57.061 04:58:26 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:57.061 04:58:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:57.061 04:58:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:57.061 04:58:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:57.061 04:58:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:57.061 04:58:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:57.061 04:58:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:57.061 04:58:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:57.061 04:58:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:57.061 04:58:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:57.061 04:58:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.061 04:58:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.319 04:58:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:57.319 "name": "Existed_Raid", 00:15:57.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.319 "strip_size_kb": 64, 00:15:57.319 "state": "configuring", 00:15:57.319 "raid_level": "concat", 00:15:57.319 "superblock": false, 00:15:57.319 "num_base_bdevs": 2, 00:15:57.319 "num_base_bdevs_discovered": 1, 00:15:57.319 "num_base_bdevs_operational": 2, 00:15:57.319 "base_bdevs_list": [ 00:15:57.319 { 00:15:57.319 "name": "BaseBdev1", 00:15:57.319 "uuid": "1ad6c5a8-d544-4450-87b4-53a32f34e5c9", 00:15:57.319 "is_configured": true, 00:15:57.319 "data_offset": 0, 00:15:57.319 "data_size": 65536 00:15:57.319 }, 00:15:57.319 { 00:15:57.319 "name": "BaseBdev2", 00:15:57.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.319 "is_configured": false, 00:15:57.319 "data_offset": 0, 00:15:57.319 "data_size": 0 00:15:57.319 } 00:15:57.319 ] 00:15:57.319 }' 00:15:57.319 04:58:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:57.319 04:58:27 -- common/autotest_common.sh@10 -- # set +x 00:15:57.884 04:58:27 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:58.142 [2024-04-27 04:58:27.975924] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:58.142 [2024-04-27 04:58:27.976030] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:58.142 04:58:27 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:58.142 04:58:27 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:58.400 [2024-04-27 04:58:28.248067] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.400 [2024-04-27 04:58:28.250666] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:15:58.400 [2024-04-27 04:58:28.250751] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:58.400 04:58:28 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:58.400 04:58:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:58.400 04:58:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:58.400 04:58:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:58.400 04:58:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:58.400 04:58:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:58.400 04:58:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:58.400 04:58:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:58.400 04:58:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:58.400 04:58:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:58.400 04:58:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:58.400 04:58:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:58.400 04:58:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.400 04:58:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.658 04:58:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:58.658 "name": "Existed_Raid", 00:15:58.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.658 "strip_size_kb": 64, 00:15:58.658 "state": "configuring", 00:15:58.658 "raid_level": "concat", 00:15:58.658 "superblock": false, 00:15:58.658 "num_base_bdevs": 2, 00:15:58.658 "num_base_bdevs_discovered": 1, 00:15:58.658 "num_base_bdevs_operational": 2, 00:15:58.658 "base_bdevs_list": [ 00:15:58.658 { 00:15:58.658 "name": "BaseBdev1", 00:15:58.658 "uuid": "1ad6c5a8-d544-4450-87b4-53a32f34e5c9", 00:15:58.658 "is_configured": true, 00:15:58.658 "data_offset": 0, 00:15:58.658 "data_size": 65536 00:15:58.658 }, 00:15:58.658 { 00:15:58.658 "name": "BaseBdev2", 00:15:58.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.658 "is_configured": false, 00:15:58.658 "data_offset": 0, 00:15:58.658 "data_size": 0 00:15:58.658 } 00:15:58.658 ] 00:15:58.658 }' 00:15:58.658 04:58:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:58.658 04:58:28 -- common/autotest_common.sh@10 -- # set +x 00:15:59.591 04:58:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:59.591 [2024-04-27 04:58:29.462512] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.591 [2024-04-27 04:58:29.462633] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:15:59.591 [2024-04-27 04:58:29.462652] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:59.591 [2024-04-27 04:58:29.462891] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:59.591 [2024-04-27 04:58:29.463574] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:15:59.591 [2024-04-27 04:58:29.463609] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:15:59.591 [2024-04-27 04:58:29.464025] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.591 BaseBdev2 00:15:59.591 04:58:29 -- bdev/bdev_raid.sh@257 
-- # waitforbdev BaseBdev2 00:15:59.591 04:58:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:59.591 04:58:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:59.591 04:58:29 -- common/autotest_common.sh@889 -- # local i 00:15:59.848 04:58:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:59.848 04:58:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:59.848 04:58:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:00.106 04:58:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:00.364 [ 00:16:00.364 { 00:16:00.364 "name": "BaseBdev2", 00:16:00.364 "aliases": [ 00:16:00.364 "6d4850a4-893e-43fe-aada-1baa27f17afb" 00:16:00.364 ], 00:16:00.364 "product_name": "Malloc disk", 00:16:00.364 "block_size": 512, 00:16:00.364 "num_blocks": 65536, 00:16:00.364 "uuid": "6d4850a4-893e-43fe-aada-1baa27f17afb", 00:16:00.364 "assigned_rate_limits": { 00:16:00.364 "rw_ios_per_sec": 0, 00:16:00.364 "rw_mbytes_per_sec": 0, 00:16:00.364 "r_mbytes_per_sec": 0, 00:16:00.364 "w_mbytes_per_sec": 0 00:16:00.364 }, 00:16:00.364 "claimed": true, 00:16:00.364 "claim_type": "exclusive_write", 00:16:00.364 "zoned": false, 00:16:00.364 "supported_io_types": { 00:16:00.364 "read": true, 00:16:00.364 "write": true, 00:16:00.364 "unmap": true, 00:16:00.364 "write_zeroes": true, 00:16:00.364 "flush": true, 00:16:00.364 "reset": true, 00:16:00.364 "compare": false, 00:16:00.364 "compare_and_write": false, 00:16:00.364 "abort": true, 00:16:00.364 "nvme_admin": false, 00:16:00.364 "nvme_io": false 00:16:00.364 }, 00:16:00.364 "memory_domains": [ 00:16:00.364 { 00:16:00.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.364 "dma_device_type": 2 00:16:00.364 } 00:16:00.364 ], 00:16:00.364 "driver_specific": {} 00:16:00.364 } 00:16:00.364 ] 00:16:00.364 04:58:30 -- common/autotest_common.sh@895 -- # return 0 00:16:00.364 04:58:30 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:00.364 04:58:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:00.364 04:58:30 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:16:00.364 04:58:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:00.364 04:58:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:00.364 04:58:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:00.364 04:58:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:00.364 04:58:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:00.364 04:58:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:00.364 04:58:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:00.364 04:58:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:00.364 04:58:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:00.364 04:58:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.364 04:58:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.622 04:58:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:00.622 "name": "Existed_Raid", 00:16:00.622 "uuid": "bfdc48fd-c883-4c58-8d07-65b68d4a8157", 00:16:00.622 "strip_size_kb": 64, 00:16:00.622 "state": "online", 00:16:00.622 "raid_level": "concat", 00:16:00.622 "superblock": false, 
00:16:00.622 "num_base_bdevs": 2, 00:16:00.622 "num_base_bdevs_discovered": 2, 00:16:00.622 "num_base_bdevs_operational": 2, 00:16:00.622 "base_bdevs_list": [ 00:16:00.622 { 00:16:00.622 "name": "BaseBdev1", 00:16:00.622 "uuid": "1ad6c5a8-d544-4450-87b4-53a32f34e5c9", 00:16:00.622 "is_configured": true, 00:16:00.622 "data_offset": 0, 00:16:00.622 "data_size": 65536 00:16:00.622 }, 00:16:00.622 { 00:16:00.622 "name": "BaseBdev2", 00:16:00.622 "uuid": "6d4850a4-893e-43fe-aada-1baa27f17afb", 00:16:00.622 "is_configured": true, 00:16:00.622 "data_offset": 0, 00:16:00.622 "data_size": 65536 00:16:00.622 } 00:16:00.622 ] 00:16:00.622 }' 00:16:00.622 04:58:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:00.622 04:58:30 -- common/autotest_common.sh@10 -- # set +x 00:16:01.188 04:58:30 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:01.445 [2024-04-27 04:58:31.251256] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:01.445 [2024-04-27 04:58:31.251317] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.445 [2024-04-27 04:58:31.251442] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.445 04:58:31 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:01.445 04:58:31 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:01.445 04:58:31 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:01.445 04:58:31 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:01.445 04:58:31 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:01.445 04:58:31 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:16:01.445 04:58:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:01.445 04:58:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:01.445 04:58:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:01.445 04:58:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:01.445 04:58:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:01.445 04:58:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:01.445 04:58:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:01.445 04:58:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:01.445 04:58:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:01.445 04:58:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.445 04:58:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.703 04:58:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:01.703 "name": "Existed_Raid", 00:16:01.703 "uuid": "bfdc48fd-c883-4c58-8d07-65b68d4a8157", 00:16:01.703 "strip_size_kb": 64, 00:16:01.703 "state": "offline", 00:16:01.703 "raid_level": "concat", 00:16:01.703 "superblock": false, 00:16:01.703 "num_base_bdevs": 2, 00:16:01.703 "num_base_bdevs_discovered": 1, 00:16:01.703 "num_base_bdevs_operational": 1, 00:16:01.703 "base_bdevs_list": [ 00:16:01.703 { 00:16:01.703 "name": null, 00:16:01.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.703 "is_configured": false, 00:16:01.703 "data_offset": 0, 00:16:01.703 "data_size": 65536 00:16:01.703 }, 00:16:01.703 { 00:16:01.703 "name": "BaseBdev2", 00:16:01.703 "uuid": "6d4850a4-893e-43fe-aada-1baa27f17afb", 00:16:01.703 "is_configured": true, 00:16:01.703 "data_offset": 0, 00:16:01.703 
"data_size": 65536 00:16:01.703 } 00:16:01.703 ] 00:16:01.703 }' 00:16:01.703 04:58:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:01.703 04:58:31 -- common/autotest_common.sh@10 -- # set +x 00:16:02.633 04:58:32 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:02.633 04:58:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:02.633 04:58:32 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.633 04:58:32 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:02.890 04:58:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:02.890 04:58:32 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:02.890 04:58:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:03.147 [2024-04-27 04:58:32.796532] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:03.147 [2024-04-27 04:58:32.796695] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:16:03.147 04:58:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:03.147 04:58:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:03.147 04:58:32 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.147 04:58:32 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:03.404 04:58:33 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:03.404 04:58:33 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:03.404 04:58:33 -- bdev/bdev_raid.sh@287 -- # killprocess 125210 00:16:03.404 04:58:33 -- common/autotest_common.sh@926 -- # '[' -z 125210 ']' 00:16:03.404 04:58:33 -- common/autotest_common.sh@930 -- # kill -0 125210 00:16:03.404 04:58:33 -- common/autotest_common.sh@931 -- # uname 00:16:03.404 04:58:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:03.404 04:58:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125210 00:16:03.404 killing process with pid 125210 00:16:03.404 04:58:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:03.404 04:58:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:03.404 04:58:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125210' 00:16:03.404 04:58:33 -- common/autotest_common.sh@945 -- # kill 125210 00:16:03.404 04:58:33 -- common/autotest_common.sh@950 -- # wait 125210 00:16:03.404 [2024-04-27 04:58:33.148208] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:03.404 [2024-04-27 04:58:33.148329] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:03.663 ************************************ 00:16:03.663 END TEST raid_state_function_test 00:16:03.664 ************************************ 00:16:03.664 04:58:33 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:03.664 00:16:03.664 real 0m10.192s 00:16:03.664 user 0m18.254s 00:16:03.664 sys 0m1.455s 00:16:03.664 04:58:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:03.664 04:58:33 -- common/autotest_common.sh@10 -- # set +x 00:16:03.920 04:58:33 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:16:03.920 04:58:33 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:03.920 04:58:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:03.920 04:58:33 -- common/autotest_common.sh@10 -- # 
set +x 00:16:03.920 ************************************ 00:16:03.920 START TEST raid_state_function_test_sb 00:16:03.920 ************************************ 00:16:03.920 04:58:33 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:16:03.920 04:58:33 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:03.920 04:58:33 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:16:03.920 04:58:33 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:03.920 04:58:33 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:03.920 04:58:33 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:03.920 04:58:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:03.920 04:58:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:03.920 04:58:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:03.920 04:58:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:03.920 04:58:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:03.920 04:58:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:03.920 04:58:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:03.921 04:58:33 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:03.921 04:58:33 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:03.921 04:58:33 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:03.921 04:58:33 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:03.921 04:58:33 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:03.921 04:58:33 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:03.921 04:58:33 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:03.921 04:58:33 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:03.921 04:58:33 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:03.921 04:58:33 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:03.921 04:58:33 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:03.921 04:58:33 -- bdev/bdev_raid.sh@226 -- # raid_pid=125531 00:16:03.921 04:58:33 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:03.921 04:58:33 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125531' 00:16:03.921 Process raid pid: 125531 00:16:03.921 04:58:33 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125531 /var/tmp/spdk-raid.sock 00:16:03.921 04:58:33 -- common/autotest_common.sh@819 -- # '[' -z 125531 ']' 00:16:03.921 04:58:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:03.921 04:58:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:03.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:03.921 04:58:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:03.921 04:58:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:03.921 04:58:33 -- common/autotest_common.sh@10 -- # set +x 00:16:03.921 [2024-04-27 04:58:33.675565] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
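The _sb variant starting here repeats the same flow with on-disk superblock metadata; the only change to the creation call is the -s flag, as visible in the trace that follows. A minimal sketch, assuming the same bdev_svc socket as above:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # -s writes a raid superblock to each base bdev; the dumps below then report
  # "superblock": true with data_offset 2048 / data_size 63488 instead of 0 / 65536
  $RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
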
00:16:03.921 [2024-04-27 04:58:33.675839] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.178 [2024-04-27 04:58:33.849649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.178 [2024-04-27 04:58:33.974763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.178 [2024-04-27 04:58:34.065808] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.807 04:58:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:04.807 04:58:34 -- common/autotest_common.sh@852 -- # return 0 00:16:04.807 04:58:34 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:05.065 [2024-04-27 04:58:34.865277] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:05.065 [2024-04-27 04:58:34.865389] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:05.065 [2024-04-27 04:58:34.865405] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:05.065 [2024-04-27 04:58:34.865428] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:05.065 04:58:34 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:05.065 04:58:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:05.065 04:58:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:05.065 04:58:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:05.065 04:58:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:05.065 04:58:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:05.065 04:58:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:05.065 04:58:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:05.065 04:58:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:05.065 04:58:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:05.065 04:58:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.065 04:58:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.323 04:58:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:05.323 "name": "Existed_Raid", 00:16:05.323 "uuid": "8435a4ad-9c06-484a-a6aa-076ff735deef", 00:16:05.323 "strip_size_kb": 64, 00:16:05.323 "state": "configuring", 00:16:05.323 "raid_level": "concat", 00:16:05.323 "superblock": true, 00:16:05.323 "num_base_bdevs": 2, 00:16:05.323 "num_base_bdevs_discovered": 0, 00:16:05.323 "num_base_bdevs_operational": 2, 00:16:05.323 "base_bdevs_list": [ 00:16:05.323 { 00:16:05.323 "name": "BaseBdev1", 00:16:05.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.323 "is_configured": false, 00:16:05.323 "data_offset": 0, 00:16:05.323 "data_size": 0 00:16:05.323 }, 00:16:05.323 { 00:16:05.323 "name": "BaseBdev2", 00:16:05.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.323 "is_configured": false, 00:16:05.323 "data_offset": 0, 00:16:05.323 "data_size": 0 00:16:05.323 } 00:16:05.323 ] 00:16:05.323 }' 00:16:05.323 04:58:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:05.323 04:58:35 -- 
common/autotest_common.sh@10 -- # set +x 00:16:05.888 04:58:35 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:06.144 [2024-04-27 04:58:35.981389] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:06.144 [2024-04-27 04:58:35.981457] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:16:06.144 04:58:35 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:06.401 [2024-04-27 04:58:36.241512] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:06.401 [2024-04-27 04:58:36.241652] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:06.401 [2024-04-27 04:58:36.241683] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:06.401 [2024-04-27 04:58:36.241712] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:06.401 04:58:36 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:06.659 [2024-04-27 04:58:36.491167] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.659 BaseBdev1 00:16:06.659 04:58:36 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:06.659 04:58:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:06.659 04:58:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:06.659 04:58:36 -- common/autotest_common.sh@889 -- # local i 00:16:06.659 04:58:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:06.659 04:58:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:06.659 04:58:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:06.917 04:58:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:07.175 [ 00:16:07.175 { 00:16:07.175 "name": "BaseBdev1", 00:16:07.175 "aliases": [ 00:16:07.175 "a3bdef3d-944c-4797-9dd2-a90f327b5771" 00:16:07.175 ], 00:16:07.175 "product_name": "Malloc disk", 00:16:07.175 "block_size": 512, 00:16:07.175 "num_blocks": 65536, 00:16:07.175 "uuid": "a3bdef3d-944c-4797-9dd2-a90f327b5771", 00:16:07.175 "assigned_rate_limits": { 00:16:07.175 "rw_ios_per_sec": 0, 00:16:07.175 "rw_mbytes_per_sec": 0, 00:16:07.175 "r_mbytes_per_sec": 0, 00:16:07.175 "w_mbytes_per_sec": 0 00:16:07.175 }, 00:16:07.175 "claimed": true, 00:16:07.175 "claim_type": "exclusive_write", 00:16:07.175 "zoned": false, 00:16:07.175 "supported_io_types": { 00:16:07.175 "read": true, 00:16:07.175 "write": true, 00:16:07.175 "unmap": true, 00:16:07.175 "write_zeroes": true, 00:16:07.175 "flush": true, 00:16:07.175 "reset": true, 00:16:07.175 "compare": false, 00:16:07.175 "compare_and_write": false, 00:16:07.175 "abort": true, 00:16:07.175 "nvme_admin": false, 00:16:07.175 "nvme_io": false 00:16:07.175 }, 00:16:07.175 "memory_domains": [ 00:16:07.175 { 00:16:07.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.175 "dma_device_type": 2 00:16:07.175 } 00:16:07.175 ], 00:16:07.175 "driver_specific": {} 00:16:07.175 } 00:16:07.175 ] 00:16:07.175 
04:58:36 -- common/autotest_common.sh@895 -- # return 0 00:16:07.175 04:58:36 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:07.175 04:58:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:07.175 04:58:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:07.175 04:58:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:07.175 04:58:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:07.175 04:58:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:07.175 04:58:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:07.175 04:58:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:07.175 04:58:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:07.175 04:58:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:07.175 04:58:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.175 04:58:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.432 04:58:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:07.432 "name": "Existed_Raid", 00:16:07.432 "uuid": "811bfefa-ba56-4852-a80e-d10ef1895515", 00:16:07.432 "strip_size_kb": 64, 00:16:07.432 "state": "configuring", 00:16:07.432 "raid_level": "concat", 00:16:07.432 "superblock": true, 00:16:07.432 "num_base_bdevs": 2, 00:16:07.432 "num_base_bdevs_discovered": 1, 00:16:07.432 "num_base_bdevs_operational": 2, 00:16:07.432 "base_bdevs_list": [ 00:16:07.432 { 00:16:07.432 "name": "BaseBdev1", 00:16:07.432 "uuid": "a3bdef3d-944c-4797-9dd2-a90f327b5771", 00:16:07.432 "is_configured": true, 00:16:07.432 "data_offset": 2048, 00:16:07.432 "data_size": 63488 00:16:07.432 }, 00:16:07.432 { 00:16:07.432 "name": "BaseBdev2", 00:16:07.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.432 "is_configured": false, 00:16:07.432 "data_offset": 0, 00:16:07.432 "data_size": 0 00:16:07.432 } 00:16:07.432 ] 00:16:07.432 }' 00:16:07.432 04:58:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:07.432 04:58:37 -- common/autotest_common.sh@10 -- # set +x 00:16:08.364 04:58:37 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:08.364 [2024-04-27 04:58:38.159690] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:08.364 [2024-04-27 04:58:38.159796] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:08.364 04:58:38 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:08.364 04:58:38 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:08.621 04:58:38 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:08.879 BaseBdev1 00:16:08.879 04:58:38 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:08.879 04:58:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:08.879 04:58:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:08.879 04:58:38 -- common/autotest_common.sh@889 -- # local i 00:16:08.879 04:58:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:08.879 04:58:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:08.879 04:58:38 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:09.138 04:58:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:09.396 [ 00:16:09.396 { 00:16:09.396 "name": "BaseBdev1", 00:16:09.396 "aliases": [ 00:16:09.396 "c6985137-a73c-4a28-aed4-aa372a28150a" 00:16:09.396 ], 00:16:09.396 "product_name": "Malloc disk", 00:16:09.396 "block_size": 512, 00:16:09.396 "num_blocks": 65536, 00:16:09.396 "uuid": "c6985137-a73c-4a28-aed4-aa372a28150a", 00:16:09.396 "assigned_rate_limits": { 00:16:09.396 "rw_ios_per_sec": 0, 00:16:09.396 "rw_mbytes_per_sec": 0, 00:16:09.396 "r_mbytes_per_sec": 0, 00:16:09.396 "w_mbytes_per_sec": 0 00:16:09.396 }, 00:16:09.396 "claimed": false, 00:16:09.396 "zoned": false, 00:16:09.396 "supported_io_types": { 00:16:09.396 "read": true, 00:16:09.396 "write": true, 00:16:09.396 "unmap": true, 00:16:09.396 "write_zeroes": true, 00:16:09.396 "flush": true, 00:16:09.396 "reset": true, 00:16:09.396 "compare": false, 00:16:09.396 "compare_and_write": false, 00:16:09.396 "abort": true, 00:16:09.396 "nvme_admin": false, 00:16:09.396 "nvme_io": false 00:16:09.396 }, 00:16:09.396 "memory_domains": [ 00:16:09.396 { 00:16:09.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.396 "dma_device_type": 2 00:16:09.396 } 00:16:09.396 ], 00:16:09.396 "driver_specific": {} 00:16:09.396 } 00:16:09.396 ] 00:16:09.396 04:58:39 -- common/autotest_common.sh@895 -- # return 0 00:16:09.396 04:58:39 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:09.655 [2024-04-27 04:58:39.499655] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.655 [2024-04-27 04:58:39.502251] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:09.655 [2024-04-27 04:58:39.502325] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:09.655 04:58:39 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:09.655 04:58:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:09.655 04:58:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:09.655 04:58:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:09.655 04:58:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:09.655 04:58:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:09.655 04:58:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:09.655 04:58:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:09.655 04:58:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:09.655 04:58:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:09.655 04:58:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:09.655 04:58:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:09.655 04:58:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.655 04:58:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.914 04:58:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:09.914 "name": "Existed_Raid", 00:16:09.914 "uuid": "015cee61-27b7-40b8-9f0d-311c3528b63c", 00:16:09.914 "strip_size_kb": 64, 00:16:09.914 "state": 
"configuring", 00:16:09.914 "raid_level": "concat", 00:16:09.914 "superblock": true, 00:16:09.914 "num_base_bdevs": 2, 00:16:09.914 "num_base_bdevs_discovered": 1, 00:16:09.914 "num_base_bdevs_operational": 2, 00:16:09.914 "base_bdevs_list": [ 00:16:09.914 { 00:16:09.914 "name": "BaseBdev1", 00:16:09.914 "uuid": "c6985137-a73c-4a28-aed4-aa372a28150a", 00:16:09.914 "is_configured": true, 00:16:09.914 "data_offset": 2048, 00:16:09.914 "data_size": 63488 00:16:09.914 }, 00:16:09.914 { 00:16:09.914 "name": "BaseBdev2", 00:16:09.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.914 "is_configured": false, 00:16:09.914 "data_offset": 0, 00:16:09.914 "data_size": 0 00:16:09.914 } 00:16:09.914 ] 00:16:09.914 }' 00:16:09.914 04:58:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:09.914 04:58:39 -- common/autotest_common.sh@10 -- # set +x 00:16:10.846 04:58:40 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:10.846 [2024-04-27 04:58:40.710826] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.846 [2024-04-27 04:58:40.711193] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:16:10.846 [2024-04-27 04:58:40.711223] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:10.846 [2024-04-27 04:58:40.711459] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:10.846 [2024-04-27 04:58:40.712175] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:16:10.846 [2024-04-27 04:58:40.712210] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:16:10.846 [2024-04-27 04:58:40.712484] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.846 BaseBdev2 00:16:10.846 04:58:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:10.846 04:58:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:10.846 04:58:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:10.846 04:58:40 -- common/autotest_common.sh@889 -- # local i 00:16:10.846 04:58:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:10.846 04:58:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:10.846 04:58:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:11.412 04:58:41 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:11.412 [ 00:16:11.412 { 00:16:11.412 "name": "BaseBdev2", 00:16:11.412 "aliases": [ 00:16:11.412 "689dcd3e-8634-4d59-9181-097667b1c644" 00:16:11.412 ], 00:16:11.412 "product_name": "Malloc disk", 00:16:11.412 "block_size": 512, 00:16:11.412 "num_blocks": 65536, 00:16:11.412 "uuid": "689dcd3e-8634-4d59-9181-097667b1c644", 00:16:11.412 "assigned_rate_limits": { 00:16:11.412 "rw_ios_per_sec": 0, 00:16:11.412 "rw_mbytes_per_sec": 0, 00:16:11.412 "r_mbytes_per_sec": 0, 00:16:11.412 "w_mbytes_per_sec": 0 00:16:11.412 }, 00:16:11.412 "claimed": true, 00:16:11.412 "claim_type": "exclusive_write", 00:16:11.412 "zoned": false, 00:16:11.412 "supported_io_types": { 00:16:11.412 "read": true, 00:16:11.412 "write": true, 00:16:11.412 "unmap": true, 00:16:11.412 "write_zeroes": true, 00:16:11.412 "flush": true, 00:16:11.412 
"reset": true, 00:16:11.412 "compare": false, 00:16:11.412 "compare_and_write": false, 00:16:11.412 "abort": true, 00:16:11.412 "nvme_admin": false, 00:16:11.412 "nvme_io": false 00:16:11.412 }, 00:16:11.412 "memory_domains": [ 00:16:11.412 { 00:16:11.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.412 "dma_device_type": 2 00:16:11.412 } 00:16:11.412 ], 00:16:11.412 "driver_specific": {} 00:16:11.412 } 00:16:11.412 ] 00:16:11.412 04:58:41 -- common/autotest_common.sh@895 -- # return 0 00:16:11.412 04:58:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:11.412 04:58:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:11.412 04:58:41 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:16:11.412 04:58:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:11.412 04:58:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:11.412 04:58:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:11.412 04:58:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:11.412 04:58:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:11.412 04:58:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:11.412 04:58:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:11.412 04:58:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:11.412 04:58:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:11.412 04:58:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.412 04:58:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.670 04:58:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:11.670 "name": "Existed_Raid", 00:16:11.670 "uuid": "015cee61-27b7-40b8-9f0d-311c3528b63c", 00:16:11.670 "strip_size_kb": 64, 00:16:11.670 "state": "online", 00:16:11.670 "raid_level": "concat", 00:16:11.670 "superblock": true, 00:16:11.670 "num_base_bdevs": 2, 00:16:11.671 "num_base_bdevs_discovered": 2, 00:16:11.671 "num_base_bdevs_operational": 2, 00:16:11.671 "base_bdevs_list": [ 00:16:11.671 { 00:16:11.671 "name": "BaseBdev1", 00:16:11.671 "uuid": "c6985137-a73c-4a28-aed4-aa372a28150a", 00:16:11.671 "is_configured": true, 00:16:11.671 "data_offset": 2048, 00:16:11.671 "data_size": 63488 00:16:11.671 }, 00:16:11.671 { 00:16:11.671 "name": "BaseBdev2", 00:16:11.671 "uuid": "689dcd3e-8634-4d59-9181-097667b1c644", 00:16:11.671 "is_configured": true, 00:16:11.671 "data_offset": 2048, 00:16:11.671 "data_size": 63488 00:16:11.671 } 00:16:11.671 ] 00:16:11.671 }' 00:16:11.671 04:58:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:11.671 04:58:41 -- common/autotest_common.sh@10 -- # set +x 00:16:12.605 04:58:42 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:12.605 [2024-04-27 04:58:42.435526] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:12.605 [2024-04-27 04:58:42.435580] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:12.605 [2024-04-27 04:58:42.435689] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.605 04:58:42 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:12.605 04:58:42 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:12.605 04:58:42 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:12.605 04:58:42 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:12.605 
04:58:42 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:12.605 04:58:42 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:16:12.605 04:58:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:12.605 04:58:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:12.605 04:58:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:12.605 04:58:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:12.605 04:58:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:12.605 04:58:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:12.605 04:58:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:12.605 04:58:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:12.605 04:58:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:12.605 04:58:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.605 04:58:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.178 04:58:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:13.178 "name": "Existed_Raid", 00:16:13.178 "uuid": "015cee61-27b7-40b8-9f0d-311c3528b63c", 00:16:13.178 "strip_size_kb": 64, 00:16:13.178 "state": "offline", 00:16:13.178 "raid_level": "concat", 00:16:13.178 "superblock": true, 00:16:13.178 "num_base_bdevs": 2, 00:16:13.178 "num_base_bdevs_discovered": 1, 00:16:13.178 "num_base_bdevs_operational": 1, 00:16:13.178 "base_bdevs_list": [ 00:16:13.178 { 00:16:13.178 "name": null, 00:16:13.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.178 "is_configured": false, 00:16:13.178 "data_offset": 2048, 00:16:13.178 "data_size": 63488 00:16:13.178 }, 00:16:13.178 { 00:16:13.178 "name": "BaseBdev2", 00:16:13.178 "uuid": "689dcd3e-8634-4d59-9181-097667b1c644", 00:16:13.178 "is_configured": true, 00:16:13.178 "data_offset": 2048, 00:16:13.178 "data_size": 63488 00:16:13.178 } 00:16:13.178 ] 00:16:13.178 }' 00:16:13.178 04:58:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:13.178 04:58:42 -- common/autotest_common.sh@10 -- # set +x 00:16:13.749 04:58:43 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:13.749 04:58:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:13.749 04:58:43 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.749 04:58:43 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:14.006 04:58:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:14.006 04:58:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:14.006 04:58:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:14.265 [2024-04-27 04:58:44.033588] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:14.265 [2024-04-27 04:58:44.033707] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:16:14.265 04:58:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:14.265 04:58:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:14.265 04:58:44 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.265 04:58:44 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:14.524 04:58:44 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:16:14.524 04:58:44 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:14.524 04:58:44 -- bdev/bdev_raid.sh@287 -- # killprocess 125531 00:16:14.524 04:58:44 -- common/autotest_common.sh@926 -- # '[' -z 125531 ']' 00:16:14.524 04:58:44 -- common/autotest_common.sh@930 -- # kill -0 125531 00:16:14.524 04:58:44 -- common/autotest_common.sh@931 -- # uname 00:16:14.524 04:58:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:14.524 04:58:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125531 00:16:14.524 killing process with pid 125531 00:16:14.524 04:58:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:14.524 04:58:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:14.524 04:58:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125531' 00:16:14.524 04:58:44 -- common/autotest_common.sh@945 -- # kill 125531 00:16:14.524 04:58:44 -- common/autotest_common.sh@950 -- # wait 125531 00:16:14.524 [2024-04-27 04:58:44.383765] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.524 [2024-04-27 04:58:44.383901] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.093 ************************************ 00:16:15.093 END TEST raid_state_function_test_sb 00:16:15.093 ************************************ 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:15.093 00:16:15.093 real 0m11.144s 00:16:15.093 user 0m20.115s 00:16:15.093 sys 0m1.500s 00:16:15.093 04:58:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.093 04:58:44 -- common/autotest_common.sh@10 -- # set +x 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:16:15.093 04:58:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:15.093 04:58:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:15.093 04:58:44 -- common/autotest_common.sh@10 -- # set +x 00:16:15.093 ************************************ 00:16:15.093 START TEST raid_superblock_test 00:16:15.093 ************************************ 00:16:15.093 04:58:44 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@357 -- # raid_pid=125867 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@358 -- # waitforlisten 125867 
/var/tmp/spdk-raid.sock 00:16:15.093 04:58:44 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:15.093 04:58:44 -- common/autotest_common.sh@819 -- # '[' -z 125867 ']' 00:16:15.093 04:58:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:15.093 04:58:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:15.093 04:58:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:15.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:15.093 04:58:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:15.093 04:58:44 -- common/autotest_common.sh@10 -- # set +x 00:16:15.093 [2024-04-27 04:58:44.878184] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:15.093 [2024-04-27 04:58:44.878494] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125867 ] 00:16:15.351 [2024-04-27 04:58:45.052712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.351 [2024-04-27 04:58:45.181755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.609 [2024-04-27 04:58:45.265998] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.175 04:58:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:16.175 04:58:45 -- common/autotest_common.sh@852 -- # return 0 00:16:16.175 04:58:45 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:16.175 04:58:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:16.175 04:58:45 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:16.175 04:58:45 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:16.175 04:58:45 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:16.175 04:58:45 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:16.175 04:58:45 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:16.175 04:58:45 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:16.175 04:58:45 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:16.175 malloc1 00:16:16.433 04:58:46 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:16.434 [2024-04-27 04:58:46.298691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:16.434 [2024-04-27 04:58:46.298835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.434 [2024-04-27 04:58:46.298888] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:16:16.434 [2024-04-27 04:58:46.298965] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.434 [2024-04-27 04:58:46.302017] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.434 [2024-04-27 04:58:46.302086] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:16.434 pt1 00:16:16.434 04:58:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
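The base-bdev preparation traced above layers a passthru bdev with a fixed UUID on top of each malloc bdev before the raid array is built. A minimal sketch of that step, with sizes, names and the UUID copied from the trace:

```bash
# Sketch: create a 32 MB malloc bdev with 512-byte blocks (65536 blocks, as
# reported later by bdev_get_bdevs) and wrap it in a passthru bdev pt1 that
# carries a fixed UUID. Paths and names follow the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

"$rpc" -s "$sock" bdev_malloc_create 32 512 -b malloc1
"$rpc" -s "$sock" bdev_passthru_create -b malloc1 -p pt1 \
    -u 00000000-0000-0000-0000-000000000001
```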
00:16:16.434 04:58:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:16.434 04:58:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:16.434 04:58:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:16.434 04:58:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:16.434 04:58:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:16.434 04:58:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:16.434 04:58:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:16.434 04:58:46 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:17.000 malloc2 00:16:17.000 04:58:46 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:17.000 [2024-04-27 04:58:46.830980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:17.000 [2024-04-27 04:58:46.831128] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.000 [2024-04-27 04:58:46.831182] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:17.000 [2024-04-27 04:58:46.831257] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.000 [2024-04-27 04:58:46.834198] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.000 [2024-04-27 04:58:46.834266] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:17.000 pt2 00:16:17.000 04:58:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:17.000 04:58:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:17.000 04:58:46 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:16:17.257 [2024-04-27 04:58:47.059319] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:17.257 [2024-04-27 04:58:47.061892] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:17.257 [2024-04-27 04:58:47.062189] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:16:17.257 [2024-04-27 04:58:47.062239] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:17.257 [2024-04-27 04:58:47.062424] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:17.257 [2024-04-27 04:58:47.062975] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:16:17.257 [2024-04-27 04:58:47.062998] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:16:17.257 [2024-04-27 04:58:47.063260] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.257 04:58:47 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:17.257 04:58:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:17.257 04:58:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:17.257 04:58:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:17.257 04:58:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:17.257 04:58:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
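The create-and-verify step traced above builds a concat array with a 64 KB strip and an on-disk superblock (-s) over the two passthru bdevs, then reads the raid state back over RPC. A minimal sketch, with the command forms copied from the trace and the jq state extraction derived from the select filter used there:

```bash
# Sketch: create raid_bdev1 over pt1/pt2 and confirm it comes up "online".
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

"$rpc" -s "$sock" bdev_raid_create -z 64 -s -r concat -b 'pt1 pt2' -n raid_bdev1

state=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .state')
[ "$state" = online ] || echo "unexpected raid state: $state"
```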
00:16:17.257 04:58:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:17.257 04:58:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:17.257 04:58:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:17.257 04:58:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:17.257 04:58:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.257 04:58:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.515 04:58:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:17.515 "name": "raid_bdev1", 00:16:17.515 "uuid": "04dca88f-7fa1-4bd2-b1ec-a8472f3d9fa3", 00:16:17.515 "strip_size_kb": 64, 00:16:17.515 "state": "online", 00:16:17.515 "raid_level": "concat", 00:16:17.515 "superblock": true, 00:16:17.515 "num_base_bdevs": 2, 00:16:17.515 "num_base_bdevs_discovered": 2, 00:16:17.515 "num_base_bdevs_operational": 2, 00:16:17.515 "base_bdevs_list": [ 00:16:17.515 { 00:16:17.515 "name": "pt1", 00:16:17.515 "uuid": "adce0988-72ed-5d5b-8e2a-37a4091492a4", 00:16:17.515 "is_configured": true, 00:16:17.515 "data_offset": 2048, 00:16:17.515 "data_size": 63488 00:16:17.515 }, 00:16:17.515 { 00:16:17.515 "name": "pt2", 00:16:17.515 "uuid": "e6ebb993-c821-5d47-a33c-d572cde7cced", 00:16:17.515 "is_configured": true, 00:16:17.515 "data_offset": 2048, 00:16:17.515 "data_size": 63488 00:16:17.515 } 00:16:17.515 ] 00:16:17.515 }' 00:16:17.515 04:58:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:17.515 04:58:47 -- common/autotest_common.sh@10 -- # set +x 00:16:18.446 04:58:48 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:18.446 04:58:48 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:18.446 [2024-04-27 04:58:48.219805] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:18.446 04:58:48 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=04dca88f-7fa1-4bd2-b1ec-a8472f3d9fa3 00:16:18.446 04:58:48 -- bdev/bdev_raid.sh@380 -- # '[' -z 04dca88f-7fa1-4bd2-b1ec-a8472f3d9fa3 ']' 00:16:18.446 04:58:48 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:18.703 [2024-04-27 04:58:48.451535] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:18.703 [2024-04-27 04:58:48.451585] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:18.703 [2024-04-27 04:58:48.451763] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.703 [2024-04-27 04:58:48.451843] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:18.703 [2024-04-27 04:58:48.451857] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:16:18.703 04:58:48 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:18.703 04:58:48 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.961 04:58:48 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:18.961 04:58:48 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:18.961 04:58:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:18.961 04:58:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
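The check traced above reads the array's uuid back through the generic bdev layer and then tears the array down so the superblock left on the passthru bdevs can be re-examined later. A minimal sketch of that sequence; commands and names follow the trace, and the error message is illustrative only:

```bash
# Sketch: confirm raid_bdev1 exposes a uuid, then delete the array and the
# first passthru base bdev (pt2 is removed the same way in the trace that
# follows).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

raid_bdev_uuid=$("$rpc" -s "$sock" bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
[ -n "$raid_bdev_uuid" ] || echo "raid_bdev1 reported no uuid"

"$rpc" -s "$sock" bdev_raid_delete raid_bdev1
"$rpc" -s "$sock" bdev_passthru_delete pt1
```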
00:16:19.219 04:58:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:19.219 04:58:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:19.476 04:58:49 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:19.476 04:58:49 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:19.733 04:58:49 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:19.733 04:58:49 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:19.733 04:58:49 -- common/autotest_common.sh@640 -- # local es=0 00:16:19.733 04:58:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:19.733 04:58:49 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:19.733 04:58:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:19.733 04:58:49 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:19.733 04:58:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:19.733 04:58:49 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:19.733 04:58:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:19.733 04:58:49 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:19.733 04:58:49 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:19.733 04:58:49 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:19.991 [2024-04-27 04:58:49.691855] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:19.991 [2024-04-27 04:58:49.694317] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:19.991 [2024-04-27 04:58:49.694405] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:19.991 [2024-04-27 04:58:49.694517] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:19.991 [2024-04-27 04:58:49.694564] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.991 [2024-04-27 04:58:49.694577] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:16:19.991 request: 00:16:19.991 { 00:16:19.991 "name": "raid_bdev1", 00:16:19.991 "raid_level": "concat", 00:16:19.991 "base_bdevs": [ 00:16:19.991 "malloc1", 00:16:19.991 "malloc2" 00:16:19.991 ], 00:16:19.991 "superblock": false, 00:16:19.991 "strip_size_kb": 64, 00:16:19.991 "method": "bdev_raid_create", 00:16:19.991 "req_id": 1 00:16:19.991 } 00:16:19.991 Got JSON-RPC error response 00:16:19.991 response: 00:16:19.991 { 00:16:19.991 "code": -17, 00:16:19.991 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:19.991 } 00:16:19.991 04:58:49 -- common/autotest_common.sh@643 -- # es=1 00:16:19.991 04:58:49 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:19.991 04:58:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:19.991 04:58:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:19.991 04:58:49 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.991 04:58:49 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:20.248 04:58:49 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:20.248 04:58:49 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:20.248 04:58:49 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:20.524 [2024-04-27 04:58:50.183904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:20.524 [2024-04-27 04:58:50.184063] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.524 [2024-04-27 04:58:50.184112] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:20.524 [2024-04-27 04:58:50.184145] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.524 [2024-04-27 04:58:50.186984] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.524 [2024-04-27 04:58:50.187059] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:20.524 [2024-04-27 04:58:50.187170] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:20.524 [2024-04-27 04:58:50.187259] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:20.524 pt1 00:16:20.524 04:58:50 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:16:20.524 04:58:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:20.524 04:58:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:20.524 04:58:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:20.524 04:58:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:20.524 04:58:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:20.524 04:58:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:20.524 04:58:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:20.524 04:58:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:20.524 04:58:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:20.524 04:58:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.524 04:58:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.808 04:58:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:20.808 "name": "raid_bdev1", 00:16:20.808 "uuid": "04dca88f-7fa1-4bd2-b1ec-a8472f3d9fa3", 00:16:20.808 "strip_size_kb": 64, 00:16:20.808 "state": "configuring", 00:16:20.808 "raid_level": "concat", 00:16:20.808 "superblock": true, 00:16:20.808 "num_base_bdevs": 2, 00:16:20.808 "num_base_bdevs_discovered": 1, 00:16:20.808 "num_base_bdevs_operational": 2, 00:16:20.808 "base_bdevs_list": [ 00:16:20.808 { 00:16:20.808 "name": "pt1", 00:16:20.808 "uuid": "adce0988-72ed-5d5b-8e2a-37a4091492a4", 00:16:20.808 "is_configured": true, 00:16:20.808 "data_offset": 2048, 00:16:20.808 "data_size": 63488 00:16:20.808 }, 00:16:20.808 { 00:16:20.808 "name": null, 00:16:20.808 "uuid": 
"e6ebb993-c821-5d47-a33c-d572cde7cced", 00:16:20.808 "is_configured": false, 00:16:20.808 "data_offset": 2048, 00:16:20.808 "data_size": 63488 00:16:20.808 } 00:16:20.808 ] 00:16:20.808 }' 00:16:20.808 04:58:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:20.808 04:58:50 -- common/autotest_common.sh@10 -- # set +x 00:16:21.373 04:58:51 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:16:21.373 04:58:51 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:21.373 04:58:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:21.373 04:58:51 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:21.631 [2024-04-27 04:58:51.376217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:21.631 [2024-04-27 04:58:51.376380] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.631 [2024-04-27 04:58:51.376430] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:21.631 [2024-04-27 04:58:51.376462] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.631 [2024-04-27 04:58:51.377034] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.631 [2024-04-27 04:58:51.377092] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:21.631 [2024-04-27 04:58:51.377202] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:21.631 [2024-04-27 04:58:51.377230] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:21.631 [2024-04-27 04:58:51.377374] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:16:21.631 [2024-04-27 04:58:51.377390] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:21.631 [2024-04-27 04:58:51.377504] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:21.631 [2024-04-27 04:58:51.377874] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:16:21.631 [2024-04-27 04:58:51.377899] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:16:21.631 [2024-04-27 04:58:51.378020] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.631 pt2 00:16:21.631 04:58:51 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:21.631 04:58:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:21.631 04:58:51 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:21.631 04:58:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:21.631 04:58:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:21.631 04:58:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:21.631 04:58:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:21.631 04:58:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:21.631 04:58:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:21.631 04:58:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:21.631 04:58:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:21.631 04:58:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:21.631 04:58:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.631 04:58:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.889 04:58:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:21.889 "name": "raid_bdev1", 00:16:21.889 "uuid": "04dca88f-7fa1-4bd2-b1ec-a8472f3d9fa3", 00:16:21.889 "strip_size_kb": 64, 00:16:21.889 "state": "online", 00:16:21.889 "raid_level": "concat", 00:16:21.889 "superblock": true, 00:16:21.889 "num_base_bdevs": 2, 00:16:21.889 "num_base_bdevs_discovered": 2, 00:16:21.889 "num_base_bdevs_operational": 2, 00:16:21.889 "base_bdevs_list": [ 00:16:21.889 { 00:16:21.889 "name": "pt1", 00:16:21.889 "uuid": "adce0988-72ed-5d5b-8e2a-37a4091492a4", 00:16:21.889 "is_configured": true, 00:16:21.889 "data_offset": 2048, 00:16:21.889 "data_size": 63488 00:16:21.889 }, 00:16:21.889 { 00:16:21.889 "name": "pt2", 00:16:21.889 "uuid": "e6ebb993-c821-5d47-a33c-d572cde7cced", 00:16:21.889 "is_configured": true, 00:16:21.889 "data_offset": 2048, 00:16:21.889 "data_size": 63488 00:16:21.889 } 00:16:21.889 ] 00:16:21.889 }' 00:16:21.889 04:58:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:21.889 04:58:51 -- common/autotest_common.sh@10 -- # set +x 00:16:22.820 04:58:52 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:22.820 04:58:52 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:22.820 [2024-04-27 04:58:52.637508] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.820 04:58:52 -- bdev/bdev_raid.sh@430 -- # '[' 04dca88f-7fa1-4bd2-b1ec-a8472f3d9fa3 '!=' 04dca88f-7fa1-4bd2-b1ec-a8472f3d9fa3 ']' 00:16:22.820 04:58:52 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:16:22.820 04:58:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:22.820 04:58:52 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:22.820 04:58:52 -- bdev/bdev_raid.sh@511 -- # killprocess 125867 00:16:22.820 04:58:52 -- common/autotest_common.sh@926 -- # '[' -z 125867 ']' 00:16:22.820 04:58:52 -- common/autotest_common.sh@930 -- # kill -0 125867 00:16:22.820 04:58:52 -- common/autotest_common.sh@931 -- # uname 00:16:22.820 04:58:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:22.820 04:58:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125867 00:16:22.820 killing process with pid 125867 00:16:22.820 04:58:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:22.820 04:58:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:22.820 04:58:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125867' 00:16:22.820 04:58:52 -- common/autotest_common.sh@945 -- # kill 125867 00:16:22.820 04:58:52 -- common/autotest_common.sh@950 -- # wait 125867 00:16:22.820 [2024-04-27 04:58:52.680522] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:22.820 [2024-04-27 04:58:52.680659] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.820 [2024-04-27 04:58:52.680729] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.820 [2024-04-27 04:58:52.680742] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:16:23.078 [2024-04-27 04:58:52.723627] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:23.336 00:16:23.336 real 0m8.274s 
00:16:23.336 user 0m14.702s 00:16:23.336 sys 0m1.228s 00:16:23.336 04:58:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:23.336 ************************************ 00:16:23.336 END TEST raid_superblock_test 00:16:23.336 ************************************ 00:16:23.336 04:58:53 -- common/autotest_common.sh@10 -- # set +x 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:16:23.336 04:58:53 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:23.336 04:58:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:23.336 04:58:53 -- common/autotest_common.sh@10 -- # set +x 00:16:23.336 ************************************ 00:16:23.336 START TEST raid_state_function_test 00:16:23.336 ************************************ 00:16:23.336 04:58:53 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@226 -- # raid_pid=126118 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:23.336 Process raid pid: 126118 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126118' 00:16:23.336 04:58:53 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126118 /var/tmp/spdk-raid.sock 00:16:23.336 04:58:53 -- common/autotest_common.sh@819 -- # '[' -z 126118 ']' 00:16:23.336 04:58:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:23.336 04:58:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:23.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
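This third test run switches to raid1 with two base bdevs and no superblock, so strip_size stays 0 and the superblock flag is dropped from the create call. A minimal sketch contrasting the two invocation forms that appear in this log; socket path and bdev names follow the trace, and surrounding state checks are omitted:

```bash
# Sketch: how the bdev_raid_create flags differ between the earlier concat
# runs and this raid1 run (raid1 takes no strip size; no -s means no
# on-disk superblock).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# concat with superblock (earlier in this log):
"$rpc" -s "$sock" bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# raid1 without superblock (this test):
"$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
```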
00:16:23.336 04:58:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:23.336 04:58:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:23.336 04:58:53 -- common/autotest_common.sh@10 -- # set +x 00:16:23.336 [2024-04-27 04:58:53.201293] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:23.336 [2024-04-27 04:58:53.201532] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.594 [2024-04-27 04:58:53.363511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.851 [2024-04-27 04:58:53.491175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.851 [2024-04-27 04:58:53.571236] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.414 04:58:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:24.414 04:58:54 -- common/autotest_common.sh@852 -- # return 0 00:16:24.414 04:58:54 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:24.670 [2024-04-27 04:58:54.374984] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:24.670 [2024-04-27 04:58:54.375317] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:24.670 [2024-04-27 04:58:54.375445] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.670 [2024-04-27 04:58:54.375513] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.670 04:58:54 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:24.670 04:58:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:24.670 04:58:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:24.670 04:58:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:24.670 04:58:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:24.670 04:58:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:24.670 04:58:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:24.670 04:58:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:24.670 04:58:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:24.670 04:58:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:24.670 04:58:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.670 04:58:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.927 04:58:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:24.927 "name": "Existed_Raid", 00:16:24.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.927 "strip_size_kb": 0, 00:16:24.927 "state": "configuring", 00:16:24.927 "raid_level": "raid1", 00:16:24.927 "superblock": false, 00:16:24.927 "num_base_bdevs": 2, 00:16:24.927 "num_base_bdevs_discovered": 0, 00:16:24.927 "num_base_bdevs_operational": 2, 00:16:24.927 "base_bdevs_list": [ 00:16:24.927 { 00:16:24.927 "name": "BaseBdev1", 00:16:24.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.927 "is_configured": false, 00:16:24.927 
"data_offset": 0, 00:16:24.927 "data_size": 0 00:16:24.927 }, 00:16:24.927 { 00:16:24.927 "name": "BaseBdev2", 00:16:24.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.927 "is_configured": false, 00:16:24.927 "data_offset": 0, 00:16:24.927 "data_size": 0 00:16:24.927 } 00:16:24.927 ] 00:16:24.927 }' 00:16:24.927 04:58:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:24.927 04:58:54 -- common/autotest_common.sh@10 -- # set +x 00:16:25.491 04:58:55 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:26.056 [2024-04-27 04:58:55.663099] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:26.056 [2024-04-27 04:58:55.663390] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:16:26.056 04:58:55 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:26.056 [2024-04-27 04:58:55.951202] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:26.056 [2024-04-27 04:58:55.951575] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:26.056 [2024-04-27 04:58:55.951705] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.056 [2024-04-27 04:58:55.951776] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.314 04:58:55 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:26.314 [2024-04-27 04:58:56.198936] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.314 BaseBdev1 00:16:26.572 04:58:56 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:26.572 04:58:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:26.572 04:58:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:26.572 04:58:56 -- common/autotest_common.sh@889 -- # local i 00:16:26.572 04:58:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:26.572 04:58:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:26.572 04:58:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:26.572 04:58:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:26.830 [ 00:16:26.830 { 00:16:26.830 "name": "BaseBdev1", 00:16:26.830 "aliases": [ 00:16:26.830 "41aa0cc3-3632-4eb3-93aa-c540c91aac4c" 00:16:26.830 ], 00:16:26.830 "product_name": "Malloc disk", 00:16:26.830 "block_size": 512, 00:16:26.830 "num_blocks": 65536, 00:16:26.830 "uuid": "41aa0cc3-3632-4eb3-93aa-c540c91aac4c", 00:16:26.830 "assigned_rate_limits": { 00:16:26.830 "rw_ios_per_sec": 0, 00:16:26.830 "rw_mbytes_per_sec": 0, 00:16:26.830 "r_mbytes_per_sec": 0, 00:16:26.830 "w_mbytes_per_sec": 0 00:16:26.830 }, 00:16:26.830 "claimed": true, 00:16:26.830 "claim_type": "exclusive_write", 00:16:26.830 "zoned": false, 00:16:26.830 "supported_io_types": { 00:16:26.830 "read": true, 00:16:26.830 "write": true, 00:16:26.830 "unmap": true, 00:16:26.830 "write_zeroes": true, 00:16:26.830 "flush": true, 00:16:26.830 "reset": true, 00:16:26.830 "compare": false, 
00:16:26.830 "compare_and_write": false, 00:16:26.830 "abort": true, 00:16:26.830 "nvme_admin": false, 00:16:26.830 "nvme_io": false 00:16:26.830 }, 00:16:26.830 "memory_domains": [ 00:16:26.830 { 00:16:26.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.830 "dma_device_type": 2 00:16:26.830 } 00:16:26.830 ], 00:16:26.830 "driver_specific": {} 00:16:26.830 } 00:16:26.830 ] 00:16:26.830 04:58:56 -- common/autotest_common.sh@895 -- # return 0 00:16:26.830 04:58:56 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:26.830 04:58:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:26.830 04:58:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:26.830 04:58:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:26.830 04:58:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:26.830 04:58:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:26.830 04:58:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:26.830 04:58:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:26.830 04:58:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:26.830 04:58:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:26.830 04:58:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.830 04:58:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.087 04:58:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:27.087 "name": "Existed_Raid", 00:16:27.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.087 "strip_size_kb": 0, 00:16:27.087 "state": "configuring", 00:16:27.087 "raid_level": "raid1", 00:16:27.087 "superblock": false, 00:16:27.087 "num_base_bdevs": 2, 00:16:27.087 "num_base_bdevs_discovered": 1, 00:16:27.087 "num_base_bdevs_operational": 2, 00:16:27.087 "base_bdevs_list": [ 00:16:27.087 { 00:16:27.087 "name": "BaseBdev1", 00:16:27.087 "uuid": "41aa0cc3-3632-4eb3-93aa-c540c91aac4c", 00:16:27.087 "is_configured": true, 00:16:27.087 "data_offset": 0, 00:16:27.087 "data_size": 65536 00:16:27.087 }, 00:16:27.087 { 00:16:27.087 "name": "BaseBdev2", 00:16:27.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.087 "is_configured": false, 00:16:27.087 "data_offset": 0, 00:16:27.087 "data_size": 0 00:16:27.087 } 00:16:27.087 ] 00:16:27.087 }' 00:16:27.087 04:58:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:27.087 04:58:56 -- common/autotest_common.sh@10 -- # set +x 00:16:28.032 04:58:57 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:28.032 [2024-04-27 04:58:57.811484] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:28.032 [2024-04-27 04:58:57.811757] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:28.032 04:58:57 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:28.032 04:58:57 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:28.290 [2024-04-27 04:58:58.055621] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:28.290 [2024-04-27 04:58:58.058403] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:28.290 [2024-04-27 
04:58:58.058596] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:28.290 04:58:58 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:28.290 04:58:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:28.290 04:58:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:28.290 04:58:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:28.290 04:58:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:28.290 04:58:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:28.290 04:58:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:28.290 04:58:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:28.290 04:58:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:28.290 04:58:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:28.290 04:58:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:28.290 04:58:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:28.290 04:58:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.290 04:58:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.548 04:58:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:28.548 "name": "Existed_Raid", 00:16:28.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.548 "strip_size_kb": 0, 00:16:28.548 "state": "configuring", 00:16:28.548 "raid_level": "raid1", 00:16:28.548 "superblock": false, 00:16:28.548 "num_base_bdevs": 2, 00:16:28.548 "num_base_bdevs_discovered": 1, 00:16:28.548 "num_base_bdevs_operational": 2, 00:16:28.548 "base_bdevs_list": [ 00:16:28.548 { 00:16:28.548 "name": "BaseBdev1", 00:16:28.548 "uuid": "41aa0cc3-3632-4eb3-93aa-c540c91aac4c", 00:16:28.548 "is_configured": true, 00:16:28.548 "data_offset": 0, 00:16:28.548 "data_size": 65536 00:16:28.548 }, 00:16:28.548 { 00:16:28.548 "name": "BaseBdev2", 00:16:28.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.548 "is_configured": false, 00:16:28.548 "data_offset": 0, 00:16:28.548 "data_size": 0 00:16:28.548 } 00:16:28.548 ] 00:16:28.548 }' 00:16:28.548 04:58:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:28.548 04:58:58 -- common/autotest_common.sh@10 -- # set +x 00:16:29.114 04:58:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:29.681 [2024-04-27 04:58:59.294096] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:29.681 [2024-04-27 04:58:59.294533] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:16:29.681 [2024-04-27 04:58:59.294596] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:29.681 [2024-04-27 04:58:59.294971] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:29.681 [2024-04-27 04:58:59.295730] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:16:29.681 [2024-04-27 04:58:59.295890] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:16:29.681 [2024-04-27 04:58:59.296370] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.681 BaseBdev2 00:16:29.681 04:58:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:29.681 
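The trace above registers the raid1 array while one of its members is still missing: bdev_raid_create accepts the request, the array sits in the "configuring" state, and only when the second malloc bdev appears does the raid module claim it and bring Existed_Raid online. A minimal sketch of that flow, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock and using only RPCs that appear in this log (the RPC shell variable and the trailing .state filter are illustrative shorthand):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Create one member, then register the array while the other member is still missing;
# the request is accepted but the array stays in the "configuring" state.
$RPC bdev_malloc_create 32 512 -b BaseBdev1
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # configuring

# Creating the missing member lets the raid module claim it and bring the array online.
$RPC bdev_malloc_create 32 512 -b BaseBdev2
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # online

This is essentially what the verify_raid_bdev_state helper traced above checks each time it pipes bdev_raid_get_bdevs all through jq and compares the reported state and base bdev counts against the expected values.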
04:58:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:29.681 04:58:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:29.681 04:58:59 -- common/autotest_common.sh@889 -- # local i 00:16:29.681 04:58:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:29.681 04:58:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:29.681 04:58:59 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:29.940 04:58:59 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:29.940 [ 00:16:29.940 { 00:16:29.940 "name": "BaseBdev2", 00:16:29.940 "aliases": [ 00:16:29.940 "4f5a2fae-7a96-4545-ac6f-d11b81dca951" 00:16:29.940 ], 00:16:29.940 "product_name": "Malloc disk", 00:16:29.940 "block_size": 512, 00:16:29.940 "num_blocks": 65536, 00:16:29.940 "uuid": "4f5a2fae-7a96-4545-ac6f-d11b81dca951", 00:16:29.940 "assigned_rate_limits": { 00:16:29.940 "rw_ios_per_sec": 0, 00:16:29.940 "rw_mbytes_per_sec": 0, 00:16:29.940 "r_mbytes_per_sec": 0, 00:16:29.940 "w_mbytes_per_sec": 0 00:16:29.940 }, 00:16:29.940 "claimed": true, 00:16:29.940 "claim_type": "exclusive_write", 00:16:29.940 "zoned": false, 00:16:29.940 "supported_io_types": { 00:16:29.940 "read": true, 00:16:29.940 "write": true, 00:16:29.940 "unmap": true, 00:16:29.940 "write_zeroes": true, 00:16:29.940 "flush": true, 00:16:29.940 "reset": true, 00:16:29.940 "compare": false, 00:16:29.940 "compare_and_write": false, 00:16:29.940 "abort": true, 00:16:29.940 "nvme_admin": false, 00:16:29.940 "nvme_io": false 00:16:29.940 }, 00:16:29.940 "memory_domains": [ 00:16:29.940 { 00:16:29.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.940 "dma_device_type": 2 00:16:29.940 } 00:16:29.940 ], 00:16:29.940 "driver_specific": {} 00:16:29.940 } 00:16:29.940 ] 00:16:29.940 04:58:59 -- common/autotest_common.sh@895 -- # return 0 00:16:29.940 04:58:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:29.940 04:58:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:29.940 04:58:59 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:29.940 04:58:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:29.940 04:58:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:29.940 04:58:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:29.940 04:58:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:29.940 04:58:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:29.940 04:58:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:29.940 04:58:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:29.940 04:58:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:29.940 04:58:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:29.940 04:58:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.940 04:58:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.197 04:59:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:30.197 "name": "Existed_Raid", 00:16:30.197 "uuid": "c6019531-3937-4b2a-989c-fc2425c6dec5", 00:16:30.197 "strip_size_kb": 0, 00:16:30.197 "state": "online", 00:16:30.197 "raid_level": "raid1", 00:16:30.197 "superblock": false, 00:16:30.197 "num_base_bdevs": 2, 00:16:30.197 
"num_base_bdevs_discovered": 2, 00:16:30.197 "num_base_bdevs_operational": 2, 00:16:30.197 "base_bdevs_list": [ 00:16:30.197 { 00:16:30.197 "name": "BaseBdev1", 00:16:30.197 "uuid": "41aa0cc3-3632-4eb3-93aa-c540c91aac4c", 00:16:30.197 "is_configured": true, 00:16:30.197 "data_offset": 0, 00:16:30.197 "data_size": 65536 00:16:30.197 }, 00:16:30.197 { 00:16:30.197 "name": "BaseBdev2", 00:16:30.197 "uuid": "4f5a2fae-7a96-4545-ac6f-d11b81dca951", 00:16:30.197 "is_configured": true, 00:16:30.197 "data_offset": 0, 00:16:30.197 "data_size": 65536 00:16:30.197 } 00:16:30.197 ] 00:16:30.197 }' 00:16:30.197 04:59:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:30.197 04:59:00 -- common/autotest_common.sh@10 -- # set +x 00:16:31.130 04:59:00 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:31.130 [2024-04-27 04:59:00.942708] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:31.130 04:59:00 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:31.130 04:59:00 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:31.130 04:59:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:31.130 04:59:00 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:31.130 04:59:00 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:31.130 04:59:00 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:31.130 04:59:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:31.130 04:59:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:31.130 04:59:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:31.130 04:59:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:31.130 04:59:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:31.130 04:59:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:31.130 04:59:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:31.130 04:59:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:31.130 04:59:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:31.130 04:59:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.130 04:59:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.388 04:59:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:31.388 "name": "Existed_Raid", 00:16:31.388 "uuid": "c6019531-3937-4b2a-989c-fc2425c6dec5", 00:16:31.388 "strip_size_kb": 0, 00:16:31.388 "state": "online", 00:16:31.388 "raid_level": "raid1", 00:16:31.388 "superblock": false, 00:16:31.388 "num_base_bdevs": 2, 00:16:31.388 "num_base_bdevs_discovered": 1, 00:16:31.388 "num_base_bdevs_operational": 1, 00:16:31.388 "base_bdevs_list": [ 00:16:31.388 { 00:16:31.388 "name": null, 00:16:31.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.388 "is_configured": false, 00:16:31.388 "data_offset": 0, 00:16:31.388 "data_size": 65536 00:16:31.388 }, 00:16:31.388 { 00:16:31.388 "name": "BaseBdev2", 00:16:31.388 "uuid": "4f5a2fae-7a96-4545-ac6f-d11b81dca951", 00:16:31.388 "is_configured": true, 00:16:31.388 "data_offset": 0, 00:16:31.388 "data_size": 65536 00:16:31.388 } 00:16:31.388 ] 00:16:31.388 }' 00:16:31.388 04:59:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:31.388 04:59:01 -- common/autotest_common.sh@10 -- # set +x 00:16:32.321 04:59:01 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:32.321 04:59:01 -- bdev/bdev_raid.sh@273 -- # 
(( i < num_base_bdevs )) 00:16:32.321 04:59:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.321 04:59:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:32.321 04:59:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:32.321 04:59:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:32.321 04:59:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:32.579 [2024-04-27 04:59:02.413120] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:32.579 [2024-04-27 04:59:02.413168] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.579 [2024-04-27 04:59:02.413291] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.579 [2024-04-27 04:59:02.428285] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.579 [2024-04-27 04:59:02.428327] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:16:32.579 04:59:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:32.579 04:59:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:32.579 04:59:02 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.579 04:59:02 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:32.838 04:59:02 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:32.838 04:59:02 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:32.838 04:59:02 -- bdev/bdev_raid.sh@287 -- # killprocess 126118 00:16:32.838 04:59:02 -- common/autotest_common.sh@926 -- # '[' -z 126118 ']' 00:16:32.838 04:59:02 -- common/autotest_common.sh@930 -- # kill -0 126118 00:16:32.838 04:59:02 -- common/autotest_common.sh@931 -- # uname 00:16:32.838 04:59:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:32.838 04:59:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126118 00:16:32.838 killing process with pid 126118 00:16:32.838 04:59:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:32.838 04:59:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:32.838 04:59:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126118' 00:16:32.838 04:59:02 -- common/autotest_common.sh@945 -- # kill 126118 00:16:32.838 04:59:02 -- common/autotest_common.sh@950 -- # wait 126118 00:16:32.838 [2024-04-27 04:59:02.718258] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:32.838 [2024-04-27 04:59:02.718415] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:33.404 ************************************ 00:16:33.404 END TEST raid_state_function_test 00:16:33.404 ************************************ 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:33.404 00:16:33.404 real 0m9.925s 00:16:33.404 user 0m17.851s 00:16:33.404 sys 0m1.392s 00:16:33.404 04:59:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:33.404 04:59:03 -- common/autotest_common.sh@10 -- # set +x 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:16:33.404 04:59:03 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:33.404 04:59:03 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:16:33.404 04:59:03 -- common/autotest_common.sh@10 -- # set +x 00:16:33.404 ************************************ 00:16:33.404 START TEST raid_state_function_test_sb 00:16:33.404 ************************************ 00:16:33.404 04:59:03 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@226 -- # raid_pid=126433 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126433' 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:33.404 Process raid pid: 126433 00:16:33.404 04:59:03 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126433 /var/tmp/spdk-raid.sock 00:16:33.404 04:59:03 -- common/autotest_common.sh@819 -- # '[' -z 126433 ']' 00:16:33.404 04:59:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:33.404 04:59:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:33.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:33.404 04:59:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:33.404 04:59:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:33.404 04:59:03 -- common/autotest_common.sh@10 -- # set +x 00:16:33.404 [2024-04-27 04:59:03.182899] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
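The second test in this run (raid_state_function_test_sb) goes through the same prologue as the first: it launches a bare bdev_svc application against a private RPC socket, records its pid (126433 here), and waits for the socket before issuing any bdev_* RPCs. Roughly, and only as an illustration of what the waitforlisten helper in common/autotest_common.sh does with the arguments traced above:

SOCK=/var/tmp/spdk-raid.sock
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$SOCK" -i 0 -L bdev_raid &
raid_pid=$!

# Wait for the app to create its RPC UNIX domain socket before talking to it.
while [ ! -S "$SOCK" ]; do sleep 0.1; done
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" bdev_raid_get_bdevs all

The recorded pid is what killprocess uses at the end of each test to shut the app down again.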
00:16:33.404 [2024-04-27 04:59:03.183167] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.662 [2024-04-27 04:59:03.339780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.662 [2024-04-27 04:59:03.453756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.662 [2024-04-27 04:59:03.537215] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.233 04:59:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:34.233 04:59:04 -- common/autotest_common.sh@852 -- # return 0 00:16:34.233 04:59:04 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:34.499 [2024-04-27 04:59:04.346130] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:34.499 [2024-04-27 04:59:04.346250] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:34.499 [2024-04-27 04:59:04.346267] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.499 [2024-04-27 04:59:04.346317] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.499 04:59:04 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:34.499 04:59:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:34.499 04:59:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:34.499 04:59:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:34.499 04:59:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:34.499 04:59:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:34.499 04:59:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:34.499 04:59:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:34.499 04:59:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:34.499 04:59:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:34.499 04:59:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.499 04:59:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.756 04:59:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:34.756 "name": "Existed_Raid", 00:16:34.756 "uuid": "8bf52946-3c71-46b8-9119-9ad9770f3edf", 00:16:34.756 "strip_size_kb": 0, 00:16:34.756 "state": "configuring", 00:16:34.756 "raid_level": "raid1", 00:16:34.756 "superblock": true, 00:16:34.756 "num_base_bdevs": 2, 00:16:34.756 "num_base_bdevs_discovered": 0, 00:16:34.756 "num_base_bdevs_operational": 2, 00:16:34.756 "base_bdevs_list": [ 00:16:34.756 { 00:16:34.756 "name": "BaseBdev1", 00:16:34.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.756 "is_configured": false, 00:16:34.756 "data_offset": 0, 00:16:34.756 "data_size": 0 00:16:34.756 }, 00:16:34.756 { 00:16:34.756 "name": "BaseBdev2", 00:16:34.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.756 "is_configured": false, 00:16:34.756 "data_offset": 0, 00:16:34.756 "data_size": 0 00:16:34.756 } 00:16:34.756 ] 00:16:34.756 }' 00:16:34.756 04:59:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:34.756 04:59:04 -- 
common/autotest_common.sh@10 -- # set +x 00:16:35.688 04:59:05 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:35.688 [2024-04-27 04:59:05.518179] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:35.688 [2024-04-27 04:59:05.518257] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:16:35.688 04:59:05 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:35.946 [2024-04-27 04:59:05.786279] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:35.946 [2024-04-27 04:59:05.786410] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:35.946 [2024-04-27 04:59:05.786426] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:35.946 [2024-04-27 04:59:05.786457] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:35.947 04:59:05 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:36.205 [2024-04-27 04:59:06.077167] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:36.205 BaseBdev1 00:16:36.205 04:59:06 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:36.205 04:59:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:36.205 04:59:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:36.205 04:59:06 -- common/autotest_common.sh@889 -- # local i 00:16:36.205 04:59:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:36.205 04:59:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:36.205 04:59:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:36.463 04:59:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:36.721 [ 00:16:36.721 { 00:16:36.721 "name": "BaseBdev1", 00:16:36.721 "aliases": [ 00:16:36.721 "526a9da9-73cd-4717-9a45-19dd73b0214f" 00:16:36.721 ], 00:16:36.721 "product_name": "Malloc disk", 00:16:36.721 "block_size": 512, 00:16:36.721 "num_blocks": 65536, 00:16:36.721 "uuid": "526a9da9-73cd-4717-9a45-19dd73b0214f", 00:16:36.721 "assigned_rate_limits": { 00:16:36.721 "rw_ios_per_sec": 0, 00:16:36.721 "rw_mbytes_per_sec": 0, 00:16:36.721 "r_mbytes_per_sec": 0, 00:16:36.721 "w_mbytes_per_sec": 0 00:16:36.721 }, 00:16:36.721 "claimed": true, 00:16:36.721 "claim_type": "exclusive_write", 00:16:36.721 "zoned": false, 00:16:36.721 "supported_io_types": { 00:16:36.721 "read": true, 00:16:36.721 "write": true, 00:16:36.721 "unmap": true, 00:16:36.721 "write_zeroes": true, 00:16:36.721 "flush": true, 00:16:36.721 "reset": true, 00:16:36.721 "compare": false, 00:16:36.721 "compare_and_write": false, 00:16:36.721 "abort": true, 00:16:36.721 "nvme_admin": false, 00:16:36.721 "nvme_io": false 00:16:36.721 }, 00:16:36.721 "memory_domains": [ 00:16:36.721 { 00:16:36.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.721 "dma_device_type": 2 00:16:36.721 } 00:16:36.721 ], 00:16:36.721 "driver_specific": {} 00:16:36.721 } 00:16:36.721 ] 00:16:36.721 04:59:06 -- 
common/autotest_common.sh@895 -- # return 0 00:16:36.721 04:59:06 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:36.721 04:59:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:36.721 04:59:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:36.721 04:59:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:36.721 04:59:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:36.721 04:59:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:36.721 04:59:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:36.721 04:59:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:36.721 04:59:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:36.721 04:59:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:36.721 04:59:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.721 04:59:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.980 04:59:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:36.980 "name": "Existed_Raid", 00:16:36.980 "uuid": "a2117975-ae99-4497-ac1e-fa14ece275d3", 00:16:36.980 "strip_size_kb": 0, 00:16:36.980 "state": "configuring", 00:16:36.980 "raid_level": "raid1", 00:16:36.980 "superblock": true, 00:16:36.980 "num_base_bdevs": 2, 00:16:36.980 "num_base_bdevs_discovered": 1, 00:16:36.980 "num_base_bdevs_operational": 2, 00:16:36.980 "base_bdevs_list": [ 00:16:36.980 { 00:16:36.980 "name": "BaseBdev1", 00:16:36.980 "uuid": "526a9da9-73cd-4717-9a45-19dd73b0214f", 00:16:36.980 "is_configured": true, 00:16:36.980 "data_offset": 2048, 00:16:36.980 "data_size": 63488 00:16:36.980 }, 00:16:36.980 { 00:16:36.980 "name": "BaseBdev2", 00:16:36.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.980 "is_configured": false, 00:16:36.980 "data_offset": 0, 00:16:36.981 "data_size": 0 00:16:36.981 } 00:16:36.981 ] 00:16:36.981 }' 00:16:36.981 04:59:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:36.981 04:59:06 -- common/autotest_common.sh@10 -- # set +x 00:16:37.915 04:59:07 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:38.173 [2024-04-27 04:59:07.817723] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:38.174 [2024-04-27 04:59:07.817815] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:38.174 04:59:07 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:38.174 04:59:07 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:38.432 04:59:08 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:38.691 BaseBdev1 00:16:38.691 04:59:08 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:38.691 04:59:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:38.691 04:59:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:38.691 04:59:08 -- common/autotest_common.sh@889 -- # local i 00:16:38.691 04:59:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:38.691 04:59:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:38.691 04:59:08 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:38.949 04:59:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:39.207 [ 00:16:39.207 { 00:16:39.207 "name": "BaseBdev1", 00:16:39.207 "aliases": [ 00:16:39.207 "2ca3ae7a-5c88-4cc0-9427-54c6a95ba97b" 00:16:39.207 ], 00:16:39.207 "product_name": "Malloc disk", 00:16:39.207 "block_size": 512, 00:16:39.207 "num_blocks": 65536, 00:16:39.207 "uuid": "2ca3ae7a-5c88-4cc0-9427-54c6a95ba97b", 00:16:39.207 "assigned_rate_limits": { 00:16:39.207 "rw_ios_per_sec": 0, 00:16:39.207 "rw_mbytes_per_sec": 0, 00:16:39.207 "r_mbytes_per_sec": 0, 00:16:39.207 "w_mbytes_per_sec": 0 00:16:39.207 }, 00:16:39.207 "claimed": false, 00:16:39.207 "zoned": false, 00:16:39.207 "supported_io_types": { 00:16:39.207 "read": true, 00:16:39.207 "write": true, 00:16:39.207 "unmap": true, 00:16:39.207 "write_zeroes": true, 00:16:39.207 "flush": true, 00:16:39.207 "reset": true, 00:16:39.207 "compare": false, 00:16:39.207 "compare_and_write": false, 00:16:39.207 "abort": true, 00:16:39.207 "nvme_admin": false, 00:16:39.207 "nvme_io": false 00:16:39.207 }, 00:16:39.207 "memory_domains": [ 00:16:39.207 { 00:16:39.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.207 "dma_device_type": 2 00:16:39.207 } 00:16:39.207 ], 00:16:39.207 "driver_specific": {} 00:16:39.207 } 00:16:39.207 ] 00:16:39.207 04:59:08 -- common/autotest_common.sh@895 -- # return 0 00:16:39.207 04:59:08 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:39.466 [2024-04-27 04:59:09.110000] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.466 [2024-04-27 04:59:09.112530] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:39.466 [2024-04-27 04:59:09.112617] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:39.466 04:59:09 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:39.466 04:59:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:39.466 04:59:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:39.466 04:59:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:39.466 04:59:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:39.466 04:59:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:39.466 04:59:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:39.466 04:59:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:39.466 04:59:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:39.466 04:59:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:39.466 04:59:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:39.466 04:59:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:39.466 04:59:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.466 04:59:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.725 04:59:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:39.725 "name": "Existed_Raid", 00:16:39.725 "uuid": "200a3150-c823-4a0c-b099-f44aaacd6d18", 00:16:39.725 "strip_size_kb": 0, 00:16:39.725 "state": "configuring", 
00:16:39.725 "raid_level": "raid1", 00:16:39.725 "superblock": true, 00:16:39.725 "num_base_bdevs": 2, 00:16:39.725 "num_base_bdevs_discovered": 1, 00:16:39.725 "num_base_bdevs_operational": 2, 00:16:39.725 "base_bdevs_list": [ 00:16:39.725 { 00:16:39.725 "name": "BaseBdev1", 00:16:39.725 "uuid": "2ca3ae7a-5c88-4cc0-9427-54c6a95ba97b", 00:16:39.725 "is_configured": true, 00:16:39.725 "data_offset": 2048, 00:16:39.725 "data_size": 63488 00:16:39.725 }, 00:16:39.725 { 00:16:39.725 "name": "BaseBdev2", 00:16:39.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.725 "is_configured": false, 00:16:39.725 "data_offset": 0, 00:16:39.725 "data_size": 0 00:16:39.725 } 00:16:39.725 ] 00:16:39.725 }' 00:16:39.725 04:59:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:39.725 04:59:09 -- common/autotest_common.sh@10 -- # set +x 00:16:40.292 04:59:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:40.550 [2024-04-27 04:59:10.290320] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:40.550 [2024-04-27 04:59:10.290705] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:16:40.550 [2024-04-27 04:59:10.290744] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:40.550 [2024-04-27 04:59:10.291005] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:40.550 [2024-04-27 04:59:10.291668] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:16:40.551 [2024-04-27 04:59:10.291704] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:16:40.551 [2024-04-27 04:59:10.291991] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.551 BaseBdev2 00:16:40.551 04:59:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:40.551 04:59:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:40.551 04:59:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:40.551 04:59:10 -- common/autotest_common.sh@889 -- # local i 00:16:40.551 04:59:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:40.551 04:59:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:40.551 04:59:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:40.809 04:59:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:41.068 [ 00:16:41.068 { 00:16:41.068 "name": "BaseBdev2", 00:16:41.068 "aliases": [ 00:16:41.068 "a6a7366e-9da1-4707-bcbb-8f52e3e5fe99" 00:16:41.068 ], 00:16:41.068 "product_name": "Malloc disk", 00:16:41.068 "block_size": 512, 00:16:41.068 "num_blocks": 65536, 00:16:41.068 "uuid": "a6a7366e-9da1-4707-bcbb-8f52e3e5fe99", 00:16:41.068 "assigned_rate_limits": { 00:16:41.068 "rw_ios_per_sec": 0, 00:16:41.068 "rw_mbytes_per_sec": 0, 00:16:41.068 "r_mbytes_per_sec": 0, 00:16:41.068 "w_mbytes_per_sec": 0 00:16:41.068 }, 00:16:41.068 "claimed": true, 00:16:41.068 "claim_type": "exclusive_write", 00:16:41.068 "zoned": false, 00:16:41.068 "supported_io_types": { 00:16:41.068 "read": true, 00:16:41.068 "write": true, 00:16:41.068 "unmap": true, 00:16:41.068 "write_zeroes": true, 00:16:41.068 "flush": true, 00:16:41.068 "reset": true, 
00:16:41.068 "compare": false, 00:16:41.068 "compare_and_write": false, 00:16:41.068 "abort": true, 00:16:41.068 "nvme_admin": false, 00:16:41.068 "nvme_io": false 00:16:41.068 }, 00:16:41.068 "memory_domains": [ 00:16:41.068 { 00:16:41.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.068 "dma_device_type": 2 00:16:41.068 } 00:16:41.068 ], 00:16:41.068 "driver_specific": {} 00:16:41.068 } 00:16:41.068 ] 00:16:41.068 04:59:10 -- common/autotest_common.sh@895 -- # return 0 00:16:41.068 04:59:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:41.068 04:59:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:41.068 04:59:10 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:41.068 04:59:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:41.068 04:59:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:41.068 04:59:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:41.068 04:59:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:41.068 04:59:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:41.068 04:59:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:41.068 04:59:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:41.068 04:59:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:41.068 04:59:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:41.068 04:59:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.068 04:59:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.339 04:59:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:41.339 "name": "Existed_Raid", 00:16:41.339 "uuid": "200a3150-c823-4a0c-b099-f44aaacd6d18", 00:16:41.339 "strip_size_kb": 0, 00:16:41.339 "state": "online", 00:16:41.339 "raid_level": "raid1", 00:16:41.339 "superblock": true, 00:16:41.339 "num_base_bdevs": 2, 00:16:41.339 "num_base_bdevs_discovered": 2, 00:16:41.339 "num_base_bdevs_operational": 2, 00:16:41.339 "base_bdevs_list": [ 00:16:41.339 { 00:16:41.339 "name": "BaseBdev1", 00:16:41.339 "uuid": "2ca3ae7a-5c88-4cc0-9427-54c6a95ba97b", 00:16:41.339 "is_configured": true, 00:16:41.339 "data_offset": 2048, 00:16:41.339 "data_size": 63488 00:16:41.339 }, 00:16:41.339 { 00:16:41.339 "name": "BaseBdev2", 00:16:41.339 "uuid": "a6a7366e-9da1-4707-bcbb-8f52e3e5fe99", 00:16:41.339 "is_configured": true, 00:16:41.339 "data_offset": 2048, 00:16:41.339 "data_size": 63488 00:16:41.339 } 00:16:41.339 ] 00:16:41.339 }' 00:16:41.339 04:59:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:41.339 04:59:11 -- common/autotest_common.sh@10 -- # set +x 00:16:41.929 04:59:11 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:42.187 [2024-04-27 04:59:12.074986] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:42.446 04:59:12 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:42.446 04:59:12 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:42.446 04:59:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:42.446 04:59:12 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:42.446 04:59:12 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:42.446 04:59:12 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:42.446 04:59:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:42.446 
04:59:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:42.446 04:59:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:42.446 04:59:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:42.446 04:59:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:42.446 04:59:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:42.446 04:59:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:42.446 04:59:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:42.446 04:59:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:42.446 04:59:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.446 04:59:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.704 04:59:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:42.704 "name": "Existed_Raid", 00:16:42.704 "uuid": "200a3150-c823-4a0c-b099-f44aaacd6d18", 00:16:42.704 "strip_size_kb": 0, 00:16:42.704 "state": "online", 00:16:42.704 "raid_level": "raid1", 00:16:42.704 "superblock": true, 00:16:42.704 "num_base_bdevs": 2, 00:16:42.704 "num_base_bdevs_discovered": 1, 00:16:42.704 "num_base_bdevs_operational": 1, 00:16:42.704 "base_bdevs_list": [ 00:16:42.704 { 00:16:42.704 "name": null, 00:16:42.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.704 "is_configured": false, 00:16:42.704 "data_offset": 2048, 00:16:42.704 "data_size": 63488 00:16:42.704 }, 00:16:42.704 { 00:16:42.704 "name": "BaseBdev2", 00:16:42.704 "uuid": "a6a7366e-9da1-4707-bcbb-8f52e3e5fe99", 00:16:42.704 "is_configured": true, 00:16:42.704 "data_offset": 2048, 00:16:42.704 "data_size": 63488 00:16:42.704 } 00:16:42.704 ] 00:16:42.704 }' 00:16:42.704 04:59:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:42.704 04:59:12 -- common/autotest_common.sh@10 -- # set +x 00:16:43.270 04:59:13 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:43.270 04:59:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:43.270 04:59:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.270 04:59:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:43.528 04:59:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:43.528 04:59:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:43.528 04:59:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:43.787 [2024-04-27 04:59:13.569292] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:43.787 [2024-04-27 04:59:13.569375] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:43.787 [2024-04-27 04:59:13.569485] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.787 [2024-04-27 04:59:13.593192] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.787 [2024-04-27 04:59:13.593250] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:16:43.787 04:59:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:43.787 04:59:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:43.787 04:59:13 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
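Once the redundancy check passes, the loop traced here walks the remaining members. Deleting the last base bdev takes the raid bdev down with it (the "raid bdev base bdevs is 0, going to free all in destruct" message above), and the test then confirms that bdev_raid_get_bdevs reports nothing before killing the app. Condensed into the same commands, that final check looks roughly like this (the leftover-name test is illustrative):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Deleting the last member tears the raid bdev down with it.
$RPC bdev_malloc_delete BaseBdev2
raid_bdev=$($RPC bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
[ -z "$raid_bdev" ] && echo 'no raid bdevs left, as expected'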
00:16:43.787 04:59:13 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:44.045 04:59:13 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:44.045 04:59:13 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:44.045 04:59:13 -- bdev/bdev_raid.sh@287 -- # killprocess 126433 00:16:44.045 04:59:13 -- common/autotest_common.sh@926 -- # '[' -z 126433 ']' 00:16:44.045 04:59:13 -- common/autotest_common.sh@930 -- # kill -0 126433 00:16:44.045 04:59:13 -- common/autotest_common.sh@931 -- # uname 00:16:44.045 04:59:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:44.045 04:59:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126433 00:16:44.045 killing process with pid 126433 00:16:44.045 04:59:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:44.045 04:59:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:44.045 04:59:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126433' 00:16:44.045 04:59:13 -- common/autotest_common.sh@945 -- # kill 126433 00:16:44.045 04:59:13 -- common/autotest_common.sh@950 -- # wait 126433 00:16:44.045 [2024-04-27 04:59:13.915281] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:44.045 [2024-04-27 04:59:13.915393] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:44.612 ************************************ 00:16:44.612 END TEST raid_state_function_test_sb 00:16:44.612 ************************************ 00:16:44.612 04:59:14 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:44.612 00:16:44.612 real 0m11.183s 00:16:44.612 user 0m20.078s 00:16:44.612 sys 0m1.539s 00:16:44.612 04:59:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:44.612 04:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:44.612 04:59:14 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:16:44.612 04:59:14 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:44.612 04:59:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:44.612 04:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:44.612 ************************************ 00:16:44.612 START TEST raid_superblock_test 00:16:44.612 ************************************ 00:16:44.612 04:59:14 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:16:44.612 04:59:14 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:16:44.612 04:59:14 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:16:44.612 04:59:14 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:44.612 04:59:14 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:44.613 04:59:14 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:44.613 04:59:14 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:44.613 04:59:14 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:44.613 04:59:14 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:44.613 04:59:14 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:44.613 04:59:14 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:44.613 04:59:14 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:44.613 04:59:14 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:44.613 04:59:14 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:44.613 04:59:14 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:16:44.613 04:59:14 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:16:44.613 04:59:14 -- bdev/bdev_raid.sh@357 -- # raid_pid=126769 00:16:44.613 04:59:14 
-- bdev/bdev_raid.sh@358 -- # waitforlisten 126769 /var/tmp/spdk-raid.sock 00:16:44.613 04:59:14 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:44.613 04:59:14 -- common/autotest_common.sh@819 -- # '[' -z 126769 ']' 00:16:44.613 04:59:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:44.613 04:59:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:44.613 04:59:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:44.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:44.613 04:59:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:44.613 04:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:44.613 [2024-04-27 04:59:14.435897] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:44.613 [2024-04-27 04:59:14.436214] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126769 ] 00:16:44.870 [2024-04-27 04:59:14.606650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.870 [2024-04-27 04:59:14.731467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.129 [2024-04-27 04:59:14.817082] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.696 04:59:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:45.696 04:59:15 -- common/autotest_common.sh@852 -- # return 0 00:16:45.696 04:59:15 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:45.696 04:59:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:45.696 04:59:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:45.696 04:59:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:45.697 04:59:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:45.697 04:59:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:45.697 04:59:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:45.697 04:59:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:45.697 04:59:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:45.955 malloc1 00:16:45.955 04:59:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:46.213 [2024-04-27 04:59:15.894390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:46.213 [2024-04-27 04:59:15.894524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.214 [2024-04-27 04:59:15.894582] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:16:46.214 [2024-04-27 04:59:15.894641] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.214 [2024-04-27 04:59:15.897675] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.214 [2024-04-27 04:59:15.897756] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:46.214 pt1 00:16:46.214 
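raid_superblock_test prepares its members differently from the state-function tests: each leg is a malloc bdev wrapped in a passthru bdev (malloc1 becomes pt1 above, with a fixed UUID passed via -u), and the array is then built from the passthru bdevs with -s so that a superblock is written to them. The same malloc/passthru pair is repeated for pt2 just below before the array is created. Condensed, and using only commands that appear in this trace:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Each member: a malloc bdev wrapped in a passthru bdev with a fixed UUID argument.
$RPC bdev_malloc_create 32 512 -b malloc1
$RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$RPC bdev_malloc_create 32 512 -b malloc2
$RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

# -s asks bdev_raid_create to write an on-disk superblock to the members.
$RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s

Later in this log the same malloc bdevs are offered to bdev_raid_create directly and the request is rejected with "Existing raid superblock found on bdev malloc1", which is what the NOT wrapper around that call expects.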
04:59:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:46.214 04:59:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:46.214 04:59:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:46.214 04:59:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:46.214 04:59:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:46.214 04:59:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:46.214 04:59:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:46.214 04:59:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:46.214 04:59:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:46.472 malloc2 00:16:46.472 04:59:16 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:46.730 [2024-04-27 04:59:16.399249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:46.730 [2024-04-27 04:59:16.399379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.730 [2024-04-27 04:59:16.399444] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:46.730 [2024-04-27 04:59:16.399526] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.730 [2024-04-27 04:59:16.402351] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.730 [2024-04-27 04:59:16.402422] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:46.730 pt2 00:16:46.730 04:59:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:46.730 04:59:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:46.730 04:59:16 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:46.988 [2024-04-27 04:59:16.639448] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:46.988 [2024-04-27 04:59:16.642060] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:46.988 [2024-04-27 04:59:16.642323] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:16:46.989 [2024-04-27 04:59:16.642341] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:46.989 [2024-04-27 04:59:16.642550] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:46.989 [2024-04-27 04:59:16.643090] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:16:46.989 [2024-04-27 04:59:16.643116] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:16:46.989 [2024-04-27 04:59:16.643372] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.989 04:59:16 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:46.989 04:59:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:46.989 04:59:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:46.989 04:59:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:46.989 04:59:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:46.989 04:59:16 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:16:46.989 04:59:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:46.989 04:59:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:46.989 04:59:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:46.989 04:59:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:46.989 04:59:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.989 04:59:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.247 04:59:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:47.247 "name": "raid_bdev1", 00:16:47.247 "uuid": "e7e7d517-a6dd-4d9d-9cae-10a5a920ef04", 00:16:47.247 "strip_size_kb": 0, 00:16:47.247 "state": "online", 00:16:47.247 "raid_level": "raid1", 00:16:47.247 "superblock": true, 00:16:47.247 "num_base_bdevs": 2, 00:16:47.247 "num_base_bdevs_discovered": 2, 00:16:47.247 "num_base_bdevs_operational": 2, 00:16:47.247 "base_bdevs_list": [ 00:16:47.247 { 00:16:47.247 "name": "pt1", 00:16:47.247 "uuid": "a16f0a33-05ae-5d84-911b-cf589b3a91dd", 00:16:47.247 "is_configured": true, 00:16:47.247 "data_offset": 2048, 00:16:47.247 "data_size": 63488 00:16:47.247 }, 00:16:47.247 { 00:16:47.247 "name": "pt2", 00:16:47.247 "uuid": "3b38be8a-74f6-5d6b-aee0-f6c341dfaf6e", 00:16:47.247 "is_configured": true, 00:16:47.247 "data_offset": 2048, 00:16:47.247 "data_size": 63488 00:16:47.247 } 00:16:47.247 ] 00:16:47.247 }' 00:16:47.247 04:59:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:47.247 04:59:16 -- common/autotest_common.sh@10 -- # set +x 00:16:47.812 04:59:17 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:47.812 04:59:17 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:48.104 [2024-04-27 04:59:17.763894] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.104 04:59:17 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=e7e7d517-a6dd-4d9d-9cae-10a5a920ef04 00:16:48.104 04:59:17 -- bdev/bdev_raid.sh@380 -- # '[' -z e7e7d517-a6dd-4d9d-9cae-10a5a920ef04 ']' 00:16:48.104 04:59:17 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:48.361 [2024-04-27 04:59:18.031702] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.361 [2024-04-27 04:59:18.031761] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.361 [2024-04-27 04:59:18.031922] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.361 [2024-04-27 04:59:18.032021] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.361 [2024-04-27 04:59:18.032037] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:16:48.361 04:59:18 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.361 04:59:18 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:48.619 04:59:18 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:48.619 04:59:18 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:48.619 04:59:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:48.619 04:59:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:48.877 04:59:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:48.877 04:59:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:49.135 04:59:18 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:49.135 04:59:18 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:49.393 04:59:19 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:49.393 04:59:19 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:49.393 04:59:19 -- common/autotest_common.sh@640 -- # local es=0 00:16:49.393 04:59:19 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:49.393 04:59:19 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:49.393 04:59:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:49.393 04:59:19 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:49.393 04:59:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:49.393 04:59:19 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:49.393 04:59:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:49.393 04:59:19 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:49.393 04:59:19 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:49.393 04:59:19 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:49.393 [2024-04-27 04:59:19.283950] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:49.393 [2024-04-27 04:59:19.286534] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:49.393 [2024-04-27 04:59:19.286632] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:49.393 [2024-04-27 04:59:19.286731] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:49.393 [2024-04-27 04:59:19.286774] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:49.393 [2024-04-27 04:59:19.286787] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:16:49.393 request: 00:16:49.393 { 00:16:49.393 "name": "raid_bdev1", 00:16:49.393 "raid_level": "raid1", 00:16:49.393 "base_bdevs": [ 00:16:49.393 "malloc1", 00:16:49.393 "malloc2" 00:16:49.393 ], 00:16:49.393 "superblock": false, 00:16:49.393 "method": "bdev_raid_create", 00:16:49.393 "req_id": 1 00:16:49.393 } 00:16:49.393 Got JSON-RPC error response 00:16:49.393 response: 00:16:49.393 { 00:16:49.393 "code": -17, 00:16:49.393 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:49.393 } 00:16:49.651 04:59:19 -- common/autotest_common.sh@643 -- # es=1 00:16:49.651 04:59:19 -- common/autotest_common.sh@651 -- # 
(( es > 128 )) 00:16:49.651 04:59:19 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:49.651 04:59:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:49.651 04:59:19 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.651 04:59:19 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:49.909 04:59:19 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:49.909 04:59:19 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:49.909 04:59:19 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:49.909 [2024-04-27 04:59:19.784005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:49.909 [2024-04-27 04:59:19.784166] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.909 [2024-04-27 04:59:19.784222] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:49.909 [2024-04-27 04:59:19.784256] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.909 [2024-04-27 04:59:19.787103] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.909 [2024-04-27 04:59:19.787173] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:49.909 [2024-04-27 04:59:19.787291] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:49.909 [2024-04-27 04:59:19.787358] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:49.909 pt1 00:16:50.167 04:59:19 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:50.167 04:59:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:50.167 04:59:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:50.167 04:59:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:50.167 04:59:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:50.167 04:59:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:50.167 04:59:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:50.167 04:59:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:50.167 04:59:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:50.167 04:59:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:50.167 04:59:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.167 04:59:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.425 04:59:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:50.425 "name": "raid_bdev1", 00:16:50.425 "uuid": "e7e7d517-a6dd-4d9d-9cae-10a5a920ef04", 00:16:50.425 "strip_size_kb": 0, 00:16:50.425 "state": "configuring", 00:16:50.425 "raid_level": "raid1", 00:16:50.425 "superblock": true, 00:16:50.425 "num_base_bdevs": 2, 00:16:50.425 "num_base_bdevs_discovered": 1, 00:16:50.425 "num_base_bdevs_operational": 2, 00:16:50.425 "base_bdevs_list": [ 00:16:50.425 { 00:16:50.425 "name": "pt1", 00:16:50.425 "uuid": "a16f0a33-05ae-5d84-911b-cf589b3a91dd", 00:16:50.425 "is_configured": true, 00:16:50.425 "data_offset": 2048, 00:16:50.425 "data_size": 63488 00:16:50.425 }, 00:16:50.425 { 00:16:50.425 "name": null, 00:16:50.425 "uuid": "3b38be8a-74f6-5d6b-aee0-f6c341dfaf6e", 00:16:50.425 
"is_configured": false, 00:16:50.425 "data_offset": 2048, 00:16:50.425 "data_size": 63488 00:16:50.425 } 00:16:50.425 ] 00:16:50.425 }' 00:16:50.425 04:59:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:50.425 04:59:20 -- common/autotest_common.sh@10 -- # set +x 00:16:50.991 04:59:20 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:16:50.991 04:59:20 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:50.991 04:59:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:50.991 04:59:20 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:51.249 [2024-04-27 04:59:20.892296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:51.249 [2024-04-27 04:59:20.892459] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.249 [2024-04-27 04:59:20.892514] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:51.249 [2024-04-27 04:59:20.892547] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.249 [2024-04-27 04:59:20.893112] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.249 [2024-04-27 04:59:20.893156] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:51.249 [2024-04-27 04:59:20.893260] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:51.249 [2024-04-27 04:59:20.893289] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:51.249 [2024-04-27 04:59:20.893443] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:16:51.249 [2024-04-27 04:59:20.893458] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:51.249 [2024-04-27 04:59:20.893566] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:51.249 [2024-04-27 04:59:20.893929] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:16:51.249 [2024-04-27 04:59:20.893944] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:16:51.249 [2024-04-27 04:59:20.894065] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.249 pt2 00:16:51.249 04:59:20 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:51.249 04:59:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:51.249 04:59:20 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:51.249 04:59:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:51.249 04:59:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:51.249 04:59:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:51.249 04:59:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:51.249 04:59:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:51.250 04:59:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:51.250 04:59:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:51.250 04:59:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:51.250 04:59:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:51.250 04:59:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.250 04:59:20 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.508 04:59:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:51.508 "name": "raid_bdev1", 00:16:51.508 "uuid": "e7e7d517-a6dd-4d9d-9cae-10a5a920ef04", 00:16:51.508 "strip_size_kb": 0, 00:16:51.508 "state": "online", 00:16:51.508 "raid_level": "raid1", 00:16:51.508 "superblock": true, 00:16:51.508 "num_base_bdevs": 2, 00:16:51.508 "num_base_bdevs_discovered": 2, 00:16:51.508 "num_base_bdevs_operational": 2, 00:16:51.508 "base_bdevs_list": [ 00:16:51.508 { 00:16:51.508 "name": "pt1", 00:16:51.508 "uuid": "a16f0a33-05ae-5d84-911b-cf589b3a91dd", 00:16:51.508 "is_configured": true, 00:16:51.508 "data_offset": 2048, 00:16:51.508 "data_size": 63488 00:16:51.508 }, 00:16:51.508 { 00:16:51.508 "name": "pt2", 00:16:51.508 "uuid": "3b38be8a-74f6-5d6b-aee0-f6c341dfaf6e", 00:16:51.508 "is_configured": true, 00:16:51.508 "data_offset": 2048, 00:16:51.508 "data_size": 63488 00:16:51.508 } 00:16:51.508 ] 00:16:51.508 }' 00:16:51.508 04:59:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:51.508 04:59:21 -- common/autotest_common.sh@10 -- # set +x 00:16:52.075 04:59:21 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:52.075 04:59:21 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:52.333 [2024-04-27 04:59:22.128877] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.333 04:59:22 -- bdev/bdev_raid.sh@430 -- # '[' e7e7d517-a6dd-4d9d-9cae-10a5a920ef04 '!=' e7e7d517-a6dd-4d9d-9cae-10a5a920ef04 ']' 00:16:52.333 04:59:22 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:16:52.333 04:59:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:52.333 04:59:22 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:52.333 04:59:22 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:52.592 [2024-04-27 04:59:22.360704] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:52.592 04:59:22 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:52.592 04:59:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:52.592 04:59:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:52.592 04:59:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:52.592 04:59:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:52.592 04:59:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:52.592 04:59:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:52.592 04:59:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:52.592 04:59:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:52.592 04:59:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:52.592 04:59:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.592 04:59:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.851 04:59:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:52.851 "name": "raid_bdev1", 00:16:52.851 "uuid": "e7e7d517-a6dd-4d9d-9cae-10a5a920ef04", 00:16:52.851 "strip_size_kb": 0, 00:16:52.851 "state": "online", 00:16:52.851 "raid_level": "raid1", 00:16:52.851 "superblock": true, 00:16:52.851 "num_base_bdevs": 2, 00:16:52.851 "num_base_bdevs_discovered": 1, 00:16:52.851 "num_base_bdevs_operational": 1, 
00:16:52.851 "base_bdevs_list": [ 00:16:52.851 { 00:16:52.851 "name": null, 00:16:52.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.851 "is_configured": false, 00:16:52.851 "data_offset": 2048, 00:16:52.851 "data_size": 63488 00:16:52.851 }, 00:16:52.851 { 00:16:52.851 "name": "pt2", 00:16:52.851 "uuid": "3b38be8a-74f6-5d6b-aee0-f6c341dfaf6e", 00:16:52.851 "is_configured": true, 00:16:52.851 "data_offset": 2048, 00:16:52.851 "data_size": 63488 00:16:52.851 } 00:16:52.851 ] 00:16:52.851 }' 00:16:52.851 04:59:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:52.851 04:59:22 -- common/autotest_common.sh@10 -- # set +x 00:16:53.417 04:59:23 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:53.675 [2024-04-27 04:59:23.549148] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:53.675 [2024-04-27 04:59:23.549209] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:53.675 [2024-04-27 04:59:23.549310] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.675 [2024-04-27 04:59:23.549381] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:53.675 [2024-04-27 04:59:23.549396] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:16:53.933 04:59:23 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:16:53.933 04:59:23 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.933 04:59:23 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:16:53.933 04:59:23 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:16:53.933 04:59:23 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:16:53.933 04:59:23 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:53.933 04:59:23 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:54.191 04:59:24 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:16:54.191 04:59:24 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:54.191 04:59:24 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:16:54.191 04:59:24 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:16:54.191 04:59:24 -- bdev/bdev_raid.sh@462 -- # i=1 00:16:54.191 04:59:24 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:54.449 [2024-04-27 04:59:24.293310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:54.449 [2024-04-27 04:59:24.293465] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.449 [2024-04-27 04:59:24.293509] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:54.449 [2024-04-27 04:59:24.293550] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.449 [2024-04-27 04:59:24.296247] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.449 [2024-04-27 04:59:24.296315] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:54.449 [2024-04-27 04:59:24.296423] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:54.449 [2024-04-27 04:59:24.296475] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:54.449 [2024-04-27 04:59:24.296624] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:16:54.449 [2024-04-27 04:59:24.296641] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:54.449 [2024-04-27 04:59:24.296720] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:54.449 [2024-04-27 04:59:24.297078] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:16:54.449 [2024-04-27 04:59:24.297102] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:16:54.449 [2024-04-27 04:59:24.297276] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.449 pt2 00:16:54.449 04:59:24 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:54.449 04:59:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:54.449 04:59:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:54.449 04:59:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:54.449 04:59:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:54.449 04:59:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:54.449 04:59:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:54.449 04:59:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:54.449 04:59:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:54.449 04:59:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:54.449 04:59:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.449 04:59:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.016 04:59:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:55.016 "name": "raid_bdev1", 00:16:55.017 "uuid": "e7e7d517-a6dd-4d9d-9cae-10a5a920ef04", 00:16:55.017 "strip_size_kb": 0, 00:16:55.017 "state": "online", 00:16:55.017 "raid_level": "raid1", 00:16:55.017 "superblock": true, 00:16:55.017 "num_base_bdevs": 2, 00:16:55.017 "num_base_bdevs_discovered": 1, 00:16:55.017 "num_base_bdevs_operational": 1, 00:16:55.017 "base_bdevs_list": [ 00:16:55.017 { 00:16:55.017 "name": null, 00:16:55.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.017 "is_configured": false, 00:16:55.017 "data_offset": 2048, 00:16:55.017 "data_size": 63488 00:16:55.017 }, 00:16:55.017 { 00:16:55.017 "name": "pt2", 00:16:55.017 "uuid": "3b38be8a-74f6-5d6b-aee0-f6c341dfaf6e", 00:16:55.017 "is_configured": true, 00:16:55.017 "data_offset": 2048, 00:16:55.017 "data_size": 63488 00:16:55.017 } 00:16:55.017 ] 00:16:55.017 }' 00:16:55.017 04:59:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:55.017 04:59:24 -- common/autotest_common.sh@10 -- # set +x 00:16:55.584 04:59:25 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:16:55.584 04:59:25 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:55.584 04:59:25 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:16:55.843 [2024-04-27 04:59:25.497444] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.843 04:59:25 -- bdev/bdev_raid.sh@506 -- # '[' e7e7d517-a6dd-4d9d-9cae-10a5a920ef04 '!=' e7e7d517-a6dd-4d9d-9cae-10a5a920ef04 ']' 00:16:55.843 04:59:25 -- 
bdev/bdev_raid.sh@511 -- # killprocess 126769 00:16:55.843 04:59:25 -- common/autotest_common.sh@926 -- # '[' -z 126769 ']' 00:16:55.843 04:59:25 -- common/autotest_common.sh@930 -- # kill -0 126769 00:16:55.843 04:59:25 -- common/autotest_common.sh@931 -- # uname 00:16:55.843 04:59:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:55.844 04:59:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126769 00:16:55.844 killing process with pid 126769 00:16:55.844 04:59:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:55.844 04:59:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:55.844 04:59:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126769' 00:16:55.844 04:59:25 -- common/autotest_common.sh@945 -- # kill 126769 00:16:55.844 04:59:25 -- common/autotest_common.sh@950 -- # wait 126769 00:16:55.844 [2024-04-27 04:59:25.543277] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:55.844 [2024-04-27 04:59:25.543400] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.844 [2024-04-27 04:59:25.543477] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:55.844 [2024-04-27 04:59:25.543499] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:16:55.844 [2024-04-27 04:59:25.587792] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:56.102 ************************************ 00:16:56.102 END TEST raid_superblock_test 00:16:56.102 ************************************ 00:16:56.102 04:59:25 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:56.102 00:16:56.102 real 0m11.613s 00:16:56.102 user 0m21.144s 00:16:56.102 sys 0m1.604s 00:16:56.102 04:59:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:56.102 04:59:25 -- common/autotest_common.sh@10 -- # set +x 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:16:56.360 04:59:26 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:56.360 04:59:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:56.360 04:59:26 -- common/autotest_common.sh@10 -- # set +x 00:16:56.360 ************************************ 00:16:56.360 START TEST raid_state_function_test 00:16:56.360 ************************************ 00:16:56.360 04:59:26 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:56.360 04:59:26 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:56.361 04:59:26 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:56.361 04:59:26 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:56.361 04:59:26 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:56.361 04:59:26 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:56.361 04:59:26 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:56.361 04:59:26 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:56.361 04:59:26 -- bdev/bdev_raid.sh@226 -- # raid_pid=127121 00:16:56.361 04:59:26 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127121' 00:16:56.361 Process raid pid: 127121 00:16:56.361 04:59:26 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127121 /var/tmp/spdk-raid.sock 00:16:56.361 04:59:26 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:56.361 04:59:26 -- common/autotest_common.sh@819 -- # '[' -z 127121 ']' 00:16:56.361 04:59:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:56.361 04:59:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:56.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:56.361 04:59:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:56.361 04:59:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:56.361 04:59:26 -- common/autotest_common.sh@10 -- # set +x 00:16:56.361 [2024-04-27 04:59:26.097333] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
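For readability, the sequence being traced from here can be reproduced by hand against the same RPC socket. The sketch below reuses the script path, socket, strip size and bdev names exactly as they appear in the trace, and assumes the bdev_svc stub started above is already listening; it is illustrative only, not part of the test output.

    # declare a raid0 array before any of its three members exist; as the
    # bdev_raid_get_bdevs dump further down shows, it stays in the "configuring" state
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid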
00:16:56.361 [2024-04-27 04:59:26.097563] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.619 [2024-04-27 04:59:26.264347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.619 [2024-04-27 04:59:26.393974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.619 [2024-04-27 04:59:26.479917] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.223 04:59:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:57.223 04:59:27 -- common/autotest_common.sh@852 -- # return 0 00:16:57.223 04:59:27 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:57.481 [2024-04-27 04:59:27.278396] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:57.481 [2024-04-27 04:59:27.278801] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:57.481 [2024-04-27 04:59:27.278933] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:57.481 [2024-04-27 04:59:27.279004] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:57.481 [2024-04-27 04:59:27.279219] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:57.481 [2024-04-27 04:59:27.279312] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:57.481 04:59:27 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:57.481 04:59:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:57.481 04:59:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:57.481 04:59:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:57.481 04:59:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:57.481 04:59:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:57.481 04:59:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:57.481 04:59:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:57.481 04:59:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:57.481 04:59:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:57.481 04:59:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.481 04:59:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.739 04:59:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:57.739 "name": "Existed_Raid", 00:16:57.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.739 "strip_size_kb": 64, 00:16:57.739 "state": "configuring", 00:16:57.739 "raid_level": "raid0", 00:16:57.739 "superblock": false, 00:16:57.739 "num_base_bdevs": 3, 00:16:57.739 "num_base_bdevs_discovered": 0, 00:16:57.739 "num_base_bdevs_operational": 3, 00:16:57.739 "base_bdevs_list": [ 00:16:57.739 { 00:16:57.739 "name": "BaseBdev1", 00:16:57.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.739 "is_configured": false, 00:16:57.739 "data_offset": 0, 00:16:57.739 "data_size": 0 00:16:57.739 }, 00:16:57.739 { 00:16:57.739 "name": "BaseBdev2", 00:16:57.739 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:57.739 "is_configured": false, 00:16:57.739 "data_offset": 0, 00:16:57.739 "data_size": 0 00:16:57.739 }, 00:16:57.739 { 00:16:57.739 "name": "BaseBdev3", 00:16:57.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.739 "is_configured": false, 00:16:57.739 "data_offset": 0, 00:16:57.739 "data_size": 0 00:16:57.739 } 00:16:57.739 ] 00:16:57.739 }' 00:16:57.739 04:59:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:57.739 04:59:27 -- common/autotest_common.sh@10 -- # set +x 00:16:58.304 04:59:28 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:58.562 [2024-04-27 04:59:28.458634] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:58.562 [2024-04-27 04:59:28.458959] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:16:58.820 04:59:28 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:58.820 [2024-04-27 04:59:28.710691] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:58.820 [2024-04-27 04:59:28.710979] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:58.820 [2024-04-27 04:59:28.711107] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:58.820 [2024-04-27 04:59:28.711246] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:58.820 [2024-04-27 04:59:28.711347] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:58.820 [2024-04-27 04:59:28.711477] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:59.077 04:59:28 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:59.336 [2024-04-27 04:59:28.999212] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:59.336 BaseBdev1 00:16:59.336 04:59:29 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:59.336 04:59:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:59.336 04:59:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:59.336 04:59:29 -- common/autotest_common.sh@889 -- # local i 00:16:59.336 04:59:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:59.336 04:59:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:59.336 04:59:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:59.594 04:59:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:59.594 [ 00:16:59.594 { 00:16:59.594 "name": "BaseBdev1", 00:16:59.594 "aliases": [ 00:16:59.594 "5f967f4f-3b96-4765-a5f0-8ade14ab8394" 00:16:59.594 ], 00:16:59.594 "product_name": "Malloc disk", 00:16:59.594 "block_size": 512, 00:16:59.594 "num_blocks": 65536, 00:16:59.594 "uuid": "5f967f4f-3b96-4765-a5f0-8ade14ab8394", 00:16:59.594 "assigned_rate_limits": { 00:16:59.594 "rw_ios_per_sec": 0, 00:16:59.594 "rw_mbytes_per_sec": 0, 00:16:59.594 "r_mbytes_per_sec": 0, 00:16:59.594 "w_mbytes_per_sec": 0 
00:16:59.594 }, 00:16:59.594 "claimed": true, 00:16:59.594 "claim_type": "exclusive_write", 00:16:59.594 "zoned": false, 00:16:59.594 "supported_io_types": { 00:16:59.594 "read": true, 00:16:59.594 "write": true, 00:16:59.594 "unmap": true, 00:16:59.594 "write_zeroes": true, 00:16:59.594 "flush": true, 00:16:59.594 "reset": true, 00:16:59.594 "compare": false, 00:16:59.594 "compare_and_write": false, 00:16:59.594 "abort": true, 00:16:59.594 "nvme_admin": false, 00:16:59.594 "nvme_io": false 00:16:59.594 }, 00:16:59.594 "memory_domains": [ 00:16:59.594 { 00:16:59.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.594 "dma_device_type": 2 00:16:59.594 } 00:16:59.594 ], 00:16:59.594 "driver_specific": {} 00:16:59.594 } 00:16:59.594 ] 00:16:59.852 04:59:29 -- common/autotest_common.sh@895 -- # return 0 00:16:59.852 04:59:29 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:59.852 04:59:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:59.852 04:59:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:59.852 04:59:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:59.852 04:59:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:59.852 04:59:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:59.852 04:59:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:59.852 04:59:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:59.852 04:59:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:59.852 04:59:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:59.852 04:59:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.852 04:59:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.852 04:59:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:59.852 "name": "Existed_Raid", 00:16:59.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.852 "strip_size_kb": 64, 00:16:59.852 "state": "configuring", 00:16:59.852 "raid_level": "raid0", 00:16:59.852 "superblock": false, 00:16:59.852 "num_base_bdevs": 3, 00:16:59.852 "num_base_bdevs_discovered": 1, 00:16:59.852 "num_base_bdevs_operational": 3, 00:16:59.852 "base_bdevs_list": [ 00:16:59.852 { 00:16:59.852 "name": "BaseBdev1", 00:16:59.852 "uuid": "5f967f4f-3b96-4765-a5f0-8ade14ab8394", 00:16:59.852 "is_configured": true, 00:16:59.852 "data_offset": 0, 00:16:59.852 "data_size": 65536 00:16:59.852 }, 00:16:59.852 { 00:16:59.852 "name": "BaseBdev2", 00:16:59.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.852 "is_configured": false, 00:16:59.852 "data_offset": 0, 00:16:59.852 "data_size": 0 00:16:59.852 }, 00:16:59.852 { 00:16:59.852 "name": "BaseBdev3", 00:16:59.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.852 "is_configured": false, 00:16:59.852 "data_offset": 0, 00:16:59.852 "data_size": 0 00:16:59.852 } 00:16:59.852 ] 00:16:59.852 }' 00:16:59.852 04:59:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:59.852 04:59:29 -- common/autotest_common.sh@10 -- # set +x 00:17:00.785 04:59:30 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:00.785 [2024-04-27 04:59:30.631717] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:00.785 [2024-04-27 04:59:30.632096] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006680 name Existed_Raid, state configuring 00:17:00.785 04:59:30 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:00.785 04:59:30 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:01.043 [2024-04-27 04:59:30.899875] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.043 [2024-04-27 04:59:30.902652] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:01.043 [2024-04-27 04:59:30.902876] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:01.043 [2024-04-27 04:59:30.903001] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:01.043 [2024-04-27 04:59:30.903081] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:01.043 04:59:30 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:01.043 04:59:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:01.043 04:59:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:01.043 04:59:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:01.043 04:59:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:01.043 04:59:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:01.043 04:59:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:01.043 04:59:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:01.043 04:59:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:01.043 04:59:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:01.043 04:59:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:01.043 04:59:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:01.043 04:59:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.043 04:59:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.301 04:59:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:01.301 "name": "Existed_Raid", 00:17:01.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.301 "strip_size_kb": 64, 00:17:01.301 "state": "configuring", 00:17:01.301 "raid_level": "raid0", 00:17:01.301 "superblock": false, 00:17:01.301 "num_base_bdevs": 3, 00:17:01.301 "num_base_bdevs_discovered": 1, 00:17:01.301 "num_base_bdevs_operational": 3, 00:17:01.301 "base_bdevs_list": [ 00:17:01.301 { 00:17:01.301 "name": "BaseBdev1", 00:17:01.301 "uuid": "5f967f4f-3b96-4765-a5f0-8ade14ab8394", 00:17:01.301 "is_configured": true, 00:17:01.301 "data_offset": 0, 00:17:01.301 "data_size": 65536 00:17:01.301 }, 00:17:01.301 { 00:17:01.301 "name": "BaseBdev2", 00:17:01.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.301 "is_configured": false, 00:17:01.301 "data_offset": 0, 00:17:01.301 "data_size": 0 00:17:01.301 }, 00:17:01.301 { 00:17:01.301 "name": "BaseBdev3", 00:17:01.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.301 "is_configured": false, 00:17:01.301 "data_offset": 0, 00:17:01.301 "data_size": 0 00:17:01.301 } 00:17:01.301 ] 00:17:01.301 }' 00:17:01.301 04:59:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:01.301 04:59:31 -- common/autotest_common.sh@10 -- # set +x 00:17:02.235 04:59:31 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:02.235 [2024-04-27 04:59:32.098002] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:02.235 BaseBdev2 00:17:02.235 04:59:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:02.235 04:59:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:02.235 04:59:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:02.235 04:59:32 -- common/autotest_common.sh@889 -- # local i 00:17:02.235 04:59:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:02.235 04:59:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:02.235 04:59:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:02.492 04:59:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:02.750 [ 00:17:02.750 { 00:17:02.750 "name": "BaseBdev2", 00:17:02.750 "aliases": [ 00:17:02.750 "4afa2638-0110-44e1-868b-8cffe8e6761a" 00:17:02.750 ], 00:17:02.750 "product_name": "Malloc disk", 00:17:02.750 "block_size": 512, 00:17:02.750 "num_blocks": 65536, 00:17:02.750 "uuid": "4afa2638-0110-44e1-868b-8cffe8e6761a", 00:17:02.750 "assigned_rate_limits": { 00:17:02.750 "rw_ios_per_sec": 0, 00:17:02.750 "rw_mbytes_per_sec": 0, 00:17:02.750 "r_mbytes_per_sec": 0, 00:17:02.750 "w_mbytes_per_sec": 0 00:17:02.750 }, 00:17:02.750 "claimed": true, 00:17:02.750 "claim_type": "exclusive_write", 00:17:02.750 "zoned": false, 00:17:02.750 "supported_io_types": { 00:17:02.750 "read": true, 00:17:02.750 "write": true, 00:17:02.750 "unmap": true, 00:17:02.750 "write_zeroes": true, 00:17:02.750 "flush": true, 00:17:02.750 "reset": true, 00:17:02.750 "compare": false, 00:17:02.750 "compare_and_write": false, 00:17:02.750 "abort": true, 00:17:02.750 "nvme_admin": false, 00:17:02.750 "nvme_io": false 00:17:02.750 }, 00:17:02.750 "memory_domains": [ 00:17:02.750 { 00:17:02.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.750 "dma_device_type": 2 00:17:02.750 } 00:17:02.750 ], 00:17:02.750 "driver_specific": {} 00:17:02.750 } 00:17:02.750 ] 00:17:02.750 04:59:32 -- common/autotest_common.sh@895 -- # return 0 00:17:02.750 04:59:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:02.750 04:59:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:02.750 04:59:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:02.750 04:59:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:02.750 04:59:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:02.750 04:59:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:02.750 04:59:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:02.750 04:59:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:02.750 04:59:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:02.750 04:59:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:02.750 04:59:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:02.750 04:59:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:02.750 04:59:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.750 04:59:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
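The verify_raid_bdev_state helper invoked next reads the array back over the same socket; done by hand, that amounts to the sketch below (commands copied from the trace, output filtered to the Existed_Raid entry). At this point two of the three members (BaseBdev1, BaseBdev2) have been created and claimed, so the dump that follows reports "state": "configuring" with 2 of 3 base bdevs discovered.

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")'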
00:17:03.007 04:59:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:03.007 "name": "Existed_Raid", 00:17:03.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.007 "strip_size_kb": 64, 00:17:03.007 "state": "configuring", 00:17:03.007 "raid_level": "raid0", 00:17:03.007 "superblock": false, 00:17:03.007 "num_base_bdevs": 3, 00:17:03.007 "num_base_bdevs_discovered": 2, 00:17:03.007 "num_base_bdevs_operational": 3, 00:17:03.007 "base_bdevs_list": [ 00:17:03.007 { 00:17:03.007 "name": "BaseBdev1", 00:17:03.007 "uuid": "5f967f4f-3b96-4765-a5f0-8ade14ab8394", 00:17:03.007 "is_configured": true, 00:17:03.007 "data_offset": 0, 00:17:03.007 "data_size": 65536 00:17:03.007 }, 00:17:03.007 { 00:17:03.007 "name": "BaseBdev2", 00:17:03.007 "uuid": "4afa2638-0110-44e1-868b-8cffe8e6761a", 00:17:03.007 "is_configured": true, 00:17:03.007 "data_offset": 0, 00:17:03.007 "data_size": 65536 00:17:03.007 }, 00:17:03.007 { 00:17:03.007 "name": "BaseBdev3", 00:17:03.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.007 "is_configured": false, 00:17:03.007 "data_offset": 0, 00:17:03.007 "data_size": 0 00:17:03.007 } 00:17:03.007 ] 00:17:03.007 }' 00:17:03.007 04:59:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:03.007 04:59:32 -- common/autotest_common.sh@10 -- # set +x 00:17:03.938 04:59:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:03.938 [2024-04-27 04:59:33.777271] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:03.938 [2024-04-27 04:59:33.777339] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:17:03.938 [2024-04-27 04:59:33.777351] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:03.938 [2024-04-27 04:59:33.777498] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:03.938 [2024-04-27 04:59:33.777958] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:17:03.938 [2024-04-27 04:59:33.777982] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:17:03.938 [2024-04-27 04:59:33.778266] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.938 BaseBdev3 00:17:03.938 04:59:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:03.938 04:59:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:03.938 04:59:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:03.938 04:59:33 -- common/autotest_common.sh@889 -- # local i 00:17:03.938 04:59:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:03.938 04:59:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:03.938 04:59:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:04.196 04:59:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:04.454 [ 00:17:04.454 { 00:17:04.454 "name": "BaseBdev3", 00:17:04.454 "aliases": [ 00:17:04.454 "24aab809-1e85-4f56-a299-24ed50723b64" 00:17:04.454 ], 00:17:04.454 "product_name": "Malloc disk", 00:17:04.454 "block_size": 512, 00:17:04.454 "num_blocks": 65536, 00:17:04.454 "uuid": "24aab809-1e85-4f56-a299-24ed50723b64", 00:17:04.454 "assigned_rate_limits": { 00:17:04.454 
"rw_ios_per_sec": 0, 00:17:04.454 "rw_mbytes_per_sec": 0, 00:17:04.454 "r_mbytes_per_sec": 0, 00:17:04.454 "w_mbytes_per_sec": 0 00:17:04.454 }, 00:17:04.454 "claimed": true, 00:17:04.454 "claim_type": "exclusive_write", 00:17:04.454 "zoned": false, 00:17:04.454 "supported_io_types": { 00:17:04.454 "read": true, 00:17:04.454 "write": true, 00:17:04.454 "unmap": true, 00:17:04.454 "write_zeroes": true, 00:17:04.454 "flush": true, 00:17:04.454 "reset": true, 00:17:04.454 "compare": false, 00:17:04.454 "compare_and_write": false, 00:17:04.454 "abort": true, 00:17:04.454 "nvme_admin": false, 00:17:04.454 "nvme_io": false 00:17:04.454 }, 00:17:04.454 "memory_domains": [ 00:17:04.454 { 00:17:04.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.454 "dma_device_type": 2 00:17:04.454 } 00:17:04.454 ], 00:17:04.454 "driver_specific": {} 00:17:04.454 } 00:17:04.454 ] 00:17:04.454 04:59:34 -- common/autotest_common.sh@895 -- # return 0 00:17:04.454 04:59:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:04.454 04:59:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:04.454 04:59:34 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:04.454 04:59:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:04.454 04:59:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:04.454 04:59:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:04.454 04:59:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:04.454 04:59:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:04.454 04:59:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:04.454 04:59:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:04.454 04:59:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:04.454 04:59:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:04.454 04:59:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.454 04:59:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.712 04:59:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:04.712 "name": "Existed_Raid", 00:17:04.712 "uuid": "f50c0b38-5242-42ab-9d77-0aea45a9510d", 00:17:04.712 "strip_size_kb": 64, 00:17:04.712 "state": "online", 00:17:04.712 "raid_level": "raid0", 00:17:04.712 "superblock": false, 00:17:04.712 "num_base_bdevs": 3, 00:17:04.712 "num_base_bdevs_discovered": 3, 00:17:04.712 "num_base_bdevs_operational": 3, 00:17:04.712 "base_bdevs_list": [ 00:17:04.712 { 00:17:04.712 "name": "BaseBdev1", 00:17:04.712 "uuid": "5f967f4f-3b96-4765-a5f0-8ade14ab8394", 00:17:04.712 "is_configured": true, 00:17:04.712 "data_offset": 0, 00:17:04.712 "data_size": 65536 00:17:04.712 }, 00:17:04.712 { 00:17:04.712 "name": "BaseBdev2", 00:17:04.712 "uuid": "4afa2638-0110-44e1-868b-8cffe8e6761a", 00:17:04.712 "is_configured": true, 00:17:04.712 "data_offset": 0, 00:17:04.712 "data_size": 65536 00:17:04.712 }, 00:17:04.712 { 00:17:04.712 "name": "BaseBdev3", 00:17:04.712 "uuid": "24aab809-1e85-4f56-a299-24ed50723b64", 00:17:04.712 "is_configured": true, 00:17:04.712 "data_offset": 0, 00:17:04.712 "data_size": 65536 00:17:04.712 } 00:17:04.712 ] 00:17:04.712 }' 00:17:04.712 04:59:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:04.712 04:59:34 -- common/autotest_common.sh@10 -- # set +x 00:17:05.647 04:59:35 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:17:05.647 [2024-04-27 04:59:35.425935] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:05.647 [2024-04-27 04:59:35.425998] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:05.647 [2024-04-27 04:59:35.426095] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.647 04:59:35 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:05.647 04:59:35 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:05.647 04:59:35 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:05.647 04:59:35 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:05.647 04:59:35 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:05.647 04:59:35 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:05.647 04:59:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:05.647 04:59:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:05.647 04:59:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:05.647 04:59:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:05.647 04:59:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:05.647 04:59:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:05.647 04:59:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:05.647 04:59:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:05.647 04:59:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:05.647 04:59:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.647 04:59:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.905 04:59:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:05.905 "name": "Existed_Raid", 00:17:05.905 "uuid": "f50c0b38-5242-42ab-9d77-0aea45a9510d", 00:17:05.905 "strip_size_kb": 64, 00:17:05.905 "state": "offline", 00:17:05.905 "raid_level": "raid0", 00:17:05.905 "superblock": false, 00:17:05.905 "num_base_bdevs": 3, 00:17:05.905 "num_base_bdevs_discovered": 2, 00:17:05.905 "num_base_bdevs_operational": 2, 00:17:05.905 "base_bdevs_list": [ 00:17:05.905 { 00:17:05.905 "name": null, 00:17:05.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.905 "is_configured": false, 00:17:05.905 "data_offset": 0, 00:17:05.905 "data_size": 65536 00:17:05.905 }, 00:17:05.905 { 00:17:05.905 "name": "BaseBdev2", 00:17:05.905 "uuid": "4afa2638-0110-44e1-868b-8cffe8e6761a", 00:17:05.905 "is_configured": true, 00:17:05.905 "data_offset": 0, 00:17:05.905 "data_size": 65536 00:17:05.905 }, 00:17:05.905 { 00:17:05.905 "name": "BaseBdev3", 00:17:05.905 "uuid": "24aab809-1e85-4f56-a299-24ed50723b64", 00:17:05.905 "is_configured": true, 00:17:05.905 "data_offset": 0, 00:17:05.905 "data_size": 65536 00:17:05.905 } 00:17:05.905 ] 00:17:05.905 }' 00:17:05.905 04:59:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:05.905 04:59:35 -- common/autotest_common.sh@10 -- # set +x 00:17:06.839 04:59:36 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:06.839 04:59:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:06.839 04:59:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.839 04:59:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:06.839 04:59:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:06.839 04:59:36 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:17:06.839 04:59:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:07.097 [2024-04-27 04:59:36.934094] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:07.097 04:59:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:07.097 04:59:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:07.097 04:59:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.097 04:59:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:07.355 04:59:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:07.355 04:59:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:07.355 04:59:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:07.614 [2024-04-27 04:59:37.478777] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:07.614 [2024-04-27 04:59:37.478879] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:17:07.871 04:59:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:07.871 04:59:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:07.871 04:59:37 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.871 04:59:37 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:08.128 04:59:37 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:08.128 04:59:37 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:08.128 04:59:37 -- bdev/bdev_raid.sh@287 -- # killprocess 127121 00:17:08.128 04:59:37 -- common/autotest_common.sh@926 -- # '[' -z 127121 ']' 00:17:08.128 04:59:37 -- common/autotest_common.sh@930 -- # kill -0 127121 00:17:08.128 04:59:37 -- common/autotest_common.sh@931 -- # uname 00:17:08.128 04:59:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:08.128 04:59:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127121 00:17:08.128 04:59:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:08.128 04:59:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:08.128 04:59:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127121' 00:17:08.128 killing process with pid 127121 00:17:08.128 04:59:37 -- common/autotest_common.sh@945 -- # kill 127121 00:17:08.128 04:59:37 -- common/autotest_common.sh@950 -- # wait 127121 00:17:08.128 [2024-04-27 04:59:37.818988] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:08.128 [2024-04-27 04:59:37.819119] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:08.385 04:59:38 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:08.385 00:17:08.385 real 0m12.125s 00:17:08.385 user 0m22.030s 00:17:08.385 sys 0m1.689s 00:17:08.385 04:59:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:08.386 04:59:38 -- common/autotest_common.sh@10 -- # set +x 00:17:08.386 ************************************ 00:17:08.386 END TEST raid_state_function_test 00:17:08.386 ************************************ 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:17:08.386 04:59:38 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:08.386 04:59:38 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:17:08.386 04:59:38 -- common/autotest_common.sh@10 -- # set +x 00:17:08.386 ************************************ 00:17:08.386 START TEST raid_state_function_test_sb 00:17:08.386 ************************************ 00:17:08.386 04:59:38 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@226 -- # raid_pid=127504 00:17:08.386 Process raid pid: 127504 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127504' 00:17:08.386 04:59:38 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127504 /var/tmp/spdk-raid.sock 00:17:08.386 04:59:38 -- common/autotest_common.sh@819 -- # '[' -z 127504 ']' 00:17:08.386 04:59:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:08.386 04:59:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:08.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:08.386 04:59:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:08.386 04:59:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:08.386 04:59:38 -- common/autotest_common.sh@10 -- # set +x 00:17:08.643 [2024-04-27 04:59:38.289311] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:17:08.643 [2024-04-27 04:59:38.289553] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.643 [2024-04-27 04:59:38.456001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.920 [2024-04-27 04:59:38.573863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.920 [2024-04-27 04:59:38.649748] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:09.484 04:59:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:09.484 04:59:39 -- common/autotest_common.sh@852 -- # return 0 00:17:09.484 04:59:39 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:09.742 [2024-04-27 04:59:39.536212] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:09.742 [2024-04-27 04:59:39.536348] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:09.742 [2024-04-27 04:59:39.536365] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:09.742 [2024-04-27 04:59:39.536387] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:09.742 [2024-04-27 04:59:39.536395] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:09.742 [2024-04-27 04:59:39.536446] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:09.742 04:59:39 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:09.742 04:59:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:09.742 04:59:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:09.742 04:59:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:09.742 04:59:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:09.742 04:59:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:09.742 04:59:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:09.742 04:59:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:09.742 04:59:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:09.742 04:59:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:09.742 04:59:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.742 04:59:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.999 04:59:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:09.999 "name": "Existed_Raid", 00:17:09.999 "uuid": "926606ea-10c1-4858-acd8-199a1c506ca7", 00:17:09.999 "strip_size_kb": 64, 00:17:09.999 "state": "configuring", 00:17:09.999 "raid_level": "raid0", 00:17:09.999 "superblock": true, 00:17:09.999 "num_base_bdevs": 3, 00:17:09.999 "num_base_bdevs_discovered": 0, 00:17:09.999 "num_base_bdevs_operational": 3, 00:17:09.999 "base_bdevs_list": [ 00:17:09.999 { 00:17:09.999 "name": "BaseBdev1", 00:17:09.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.999 "is_configured": false, 00:17:09.999 "data_offset": 0, 00:17:09.999 "data_size": 0 00:17:09.999 }, 00:17:09.999 { 00:17:09.999 "name": "BaseBdev2", 00:17:09.999 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:09.999 "is_configured": false, 00:17:09.999 "data_offset": 0, 00:17:09.999 "data_size": 0 00:17:09.999 }, 00:17:09.999 { 00:17:09.999 "name": "BaseBdev3", 00:17:09.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.999 "is_configured": false, 00:17:09.999 "data_offset": 0, 00:17:09.999 "data_size": 0 00:17:09.999 } 00:17:09.999 ] 00:17:09.999 }' 00:17:09.999 04:59:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:09.999 04:59:39 -- common/autotest_common.sh@10 -- # set +x 00:17:10.931 04:59:40 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:10.931 [2024-04-27 04:59:40.716305] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:10.931 [2024-04-27 04:59:40.716699] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:17:10.931 04:59:40 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:11.189 [2024-04-27 04:59:40.984388] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:11.189 [2024-04-27 04:59:40.984792] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:11.189 [2024-04-27 04:59:40.984921] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:11.189 [2024-04-27 04:59:40.985075] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:11.189 [2024-04-27 04:59:40.985182] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:11.189 [2024-04-27 04:59:40.985320] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:11.189 04:59:41 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:11.447 [2024-04-27 04:59:41.223243] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.447 BaseBdev1 00:17:11.447 04:59:41 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:11.447 04:59:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:11.447 04:59:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:11.447 04:59:41 -- common/autotest_common.sh@889 -- # local i 00:17:11.447 04:59:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:11.447 04:59:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:11.447 04:59:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:11.705 04:59:41 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:11.963 [ 00:17:11.963 { 00:17:11.963 "name": "BaseBdev1", 00:17:11.963 "aliases": [ 00:17:11.963 "badd9910-7c11-44fa-bf41-9487c9651cef" 00:17:11.963 ], 00:17:11.963 "product_name": "Malloc disk", 00:17:11.963 "block_size": 512, 00:17:11.963 "num_blocks": 65536, 00:17:11.963 "uuid": "badd9910-7c11-44fa-bf41-9487c9651cef", 00:17:11.963 "assigned_rate_limits": { 00:17:11.963 "rw_ios_per_sec": 0, 00:17:11.963 "rw_mbytes_per_sec": 0, 00:17:11.963 "r_mbytes_per_sec": 0, 00:17:11.963 
"w_mbytes_per_sec": 0 00:17:11.963 }, 00:17:11.963 "claimed": true, 00:17:11.963 "claim_type": "exclusive_write", 00:17:11.963 "zoned": false, 00:17:11.963 "supported_io_types": { 00:17:11.963 "read": true, 00:17:11.963 "write": true, 00:17:11.963 "unmap": true, 00:17:11.963 "write_zeroes": true, 00:17:11.963 "flush": true, 00:17:11.963 "reset": true, 00:17:11.963 "compare": false, 00:17:11.963 "compare_and_write": false, 00:17:11.963 "abort": true, 00:17:11.963 "nvme_admin": false, 00:17:11.963 "nvme_io": false 00:17:11.963 }, 00:17:11.963 "memory_domains": [ 00:17:11.963 { 00:17:11.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.963 "dma_device_type": 2 00:17:11.963 } 00:17:11.963 ], 00:17:11.963 "driver_specific": {} 00:17:11.963 } 00:17:11.963 ] 00:17:11.963 04:59:41 -- common/autotest_common.sh@895 -- # return 0 00:17:11.963 04:59:41 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:11.963 04:59:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:11.963 04:59:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:11.963 04:59:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:11.963 04:59:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:11.963 04:59:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:11.963 04:59:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:11.963 04:59:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:11.963 04:59:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:11.963 04:59:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:11.963 04:59:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.963 04:59:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.220 04:59:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.220 "name": "Existed_Raid", 00:17:12.220 "uuid": "b3ccf5cf-f84f-4ac5-8a57-af690184d4d4", 00:17:12.220 "strip_size_kb": 64, 00:17:12.220 "state": "configuring", 00:17:12.220 "raid_level": "raid0", 00:17:12.220 "superblock": true, 00:17:12.220 "num_base_bdevs": 3, 00:17:12.220 "num_base_bdevs_discovered": 1, 00:17:12.220 "num_base_bdevs_operational": 3, 00:17:12.220 "base_bdevs_list": [ 00:17:12.220 { 00:17:12.220 "name": "BaseBdev1", 00:17:12.220 "uuid": "badd9910-7c11-44fa-bf41-9487c9651cef", 00:17:12.220 "is_configured": true, 00:17:12.220 "data_offset": 2048, 00:17:12.220 "data_size": 63488 00:17:12.220 }, 00:17:12.220 { 00:17:12.220 "name": "BaseBdev2", 00:17:12.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.220 "is_configured": false, 00:17:12.220 "data_offset": 0, 00:17:12.220 "data_size": 0 00:17:12.220 }, 00:17:12.220 { 00:17:12.220 "name": "BaseBdev3", 00:17:12.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.220 "is_configured": false, 00:17:12.220 "data_offset": 0, 00:17:12.220 "data_size": 0 00:17:12.220 } 00:17:12.220 ] 00:17:12.220 }' 00:17:12.220 04:59:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.220 04:59:41 -- common/autotest_common.sh@10 -- # set +x 00:17:12.786 04:59:42 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:13.044 [2024-04-27 04:59:42.827795] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:13.044 [2024-04-27 04:59:42.827889] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:13.044 04:59:42 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:13.044 04:59:42 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:13.326 04:59:43 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:13.583 BaseBdev1 00:17:13.583 04:59:43 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:13.583 04:59:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:13.583 04:59:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:13.583 04:59:43 -- common/autotest_common.sh@889 -- # local i 00:17:13.583 04:59:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:13.583 04:59:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:13.583 04:59:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:13.841 04:59:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:14.099 [ 00:17:14.099 { 00:17:14.099 "name": "BaseBdev1", 00:17:14.099 "aliases": [ 00:17:14.099 "bef96b16-b9de-4693-bbf7-e292c91ad519" 00:17:14.099 ], 00:17:14.099 "product_name": "Malloc disk", 00:17:14.099 "block_size": 512, 00:17:14.099 "num_blocks": 65536, 00:17:14.099 "uuid": "bef96b16-b9de-4693-bbf7-e292c91ad519", 00:17:14.099 "assigned_rate_limits": { 00:17:14.099 "rw_ios_per_sec": 0, 00:17:14.099 "rw_mbytes_per_sec": 0, 00:17:14.099 "r_mbytes_per_sec": 0, 00:17:14.099 "w_mbytes_per_sec": 0 00:17:14.099 }, 00:17:14.099 "claimed": false, 00:17:14.099 "zoned": false, 00:17:14.099 "supported_io_types": { 00:17:14.099 "read": true, 00:17:14.099 "write": true, 00:17:14.099 "unmap": true, 00:17:14.099 "write_zeroes": true, 00:17:14.099 "flush": true, 00:17:14.099 "reset": true, 00:17:14.099 "compare": false, 00:17:14.099 "compare_and_write": false, 00:17:14.099 "abort": true, 00:17:14.099 "nvme_admin": false, 00:17:14.099 "nvme_io": false 00:17:14.099 }, 00:17:14.099 "memory_domains": [ 00:17:14.099 { 00:17:14.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.099 "dma_device_type": 2 00:17:14.099 } 00:17:14.099 ], 00:17:14.099 "driver_specific": {} 00:17:14.099 } 00:17:14.099 ] 00:17:14.099 04:59:43 -- common/autotest_common.sh@895 -- # return 0 00:17:14.099 04:59:43 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:14.357 [2024-04-27 04:59:44.086726] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:14.357 [2024-04-27 04:59:44.089240] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:14.357 [2024-04-27 04:59:44.089322] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:14.357 [2024-04-27 04:59:44.089337] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:14.357 [2024-04-27 04:59:44.089367] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:14.357 04:59:44 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:14.357 04:59:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:14.357 
04:59:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:14.357 04:59:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:14.357 04:59:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:14.357 04:59:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:14.357 04:59:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:14.357 04:59:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:14.357 04:59:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.357 04:59:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.357 04:59:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.357 04:59:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.357 04:59:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.357 04:59:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.616 04:59:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.616 "name": "Existed_Raid", 00:17:14.616 "uuid": "a593ba31-9087-47cc-8bca-14f5a506d1c1", 00:17:14.616 "strip_size_kb": 64, 00:17:14.616 "state": "configuring", 00:17:14.616 "raid_level": "raid0", 00:17:14.616 "superblock": true, 00:17:14.616 "num_base_bdevs": 3, 00:17:14.616 "num_base_bdevs_discovered": 1, 00:17:14.616 "num_base_bdevs_operational": 3, 00:17:14.616 "base_bdevs_list": [ 00:17:14.616 { 00:17:14.616 "name": "BaseBdev1", 00:17:14.616 "uuid": "bef96b16-b9de-4693-bbf7-e292c91ad519", 00:17:14.616 "is_configured": true, 00:17:14.616 "data_offset": 2048, 00:17:14.616 "data_size": 63488 00:17:14.616 }, 00:17:14.617 { 00:17:14.617 "name": "BaseBdev2", 00:17:14.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.617 "is_configured": false, 00:17:14.617 "data_offset": 0, 00:17:14.617 "data_size": 0 00:17:14.617 }, 00:17:14.617 { 00:17:14.617 "name": "BaseBdev3", 00:17:14.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.617 "is_configured": false, 00:17:14.617 "data_offset": 0, 00:17:14.617 "data_size": 0 00:17:14.617 } 00:17:14.617 ] 00:17:14.617 }' 00:17:14.617 04:59:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.617 04:59:44 -- common/autotest_common.sh@10 -- # set +x 00:17:15.183 04:59:45 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:15.442 [2024-04-27 04:59:45.257990] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:15.442 BaseBdev2 00:17:15.442 04:59:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:15.442 04:59:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:15.442 04:59:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:15.442 04:59:45 -- common/autotest_common.sh@889 -- # local i 00:17:15.442 04:59:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:15.442 04:59:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:15.442 04:59:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:15.699 04:59:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:15.957 [ 00:17:15.957 { 00:17:15.957 "name": "BaseBdev2", 00:17:15.957 "aliases": [ 00:17:15.957 
"e0889de5-828b-4c93-926f-8551a95395fb" 00:17:15.957 ], 00:17:15.957 "product_name": "Malloc disk", 00:17:15.957 "block_size": 512, 00:17:15.957 "num_blocks": 65536, 00:17:15.957 "uuid": "e0889de5-828b-4c93-926f-8551a95395fb", 00:17:15.957 "assigned_rate_limits": { 00:17:15.957 "rw_ios_per_sec": 0, 00:17:15.957 "rw_mbytes_per_sec": 0, 00:17:15.957 "r_mbytes_per_sec": 0, 00:17:15.957 "w_mbytes_per_sec": 0 00:17:15.957 }, 00:17:15.957 "claimed": true, 00:17:15.957 "claim_type": "exclusive_write", 00:17:15.957 "zoned": false, 00:17:15.957 "supported_io_types": { 00:17:15.957 "read": true, 00:17:15.957 "write": true, 00:17:15.957 "unmap": true, 00:17:15.957 "write_zeroes": true, 00:17:15.957 "flush": true, 00:17:15.957 "reset": true, 00:17:15.957 "compare": false, 00:17:15.957 "compare_and_write": false, 00:17:15.957 "abort": true, 00:17:15.957 "nvme_admin": false, 00:17:15.957 "nvme_io": false 00:17:15.957 }, 00:17:15.957 "memory_domains": [ 00:17:15.957 { 00:17:15.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.957 "dma_device_type": 2 00:17:15.957 } 00:17:15.957 ], 00:17:15.957 "driver_specific": {} 00:17:15.957 } 00:17:15.957 ] 00:17:15.957 04:59:45 -- common/autotest_common.sh@895 -- # return 0 00:17:15.957 04:59:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:15.957 04:59:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:15.957 04:59:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:15.957 04:59:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:15.957 04:59:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:15.957 04:59:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:15.957 04:59:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:15.957 04:59:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:15.957 04:59:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:15.957 04:59:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:15.957 04:59:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:15.957 04:59:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:15.957 04:59:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.957 04:59:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.523 04:59:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:16.523 "name": "Existed_Raid", 00:17:16.523 "uuid": "a593ba31-9087-47cc-8bca-14f5a506d1c1", 00:17:16.523 "strip_size_kb": 64, 00:17:16.523 "state": "configuring", 00:17:16.523 "raid_level": "raid0", 00:17:16.523 "superblock": true, 00:17:16.523 "num_base_bdevs": 3, 00:17:16.523 "num_base_bdevs_discovered": 2, 00:17:16.523 "num_base_bdevs_operational": 3, 00:17:16.523 "base_bdevs_list": [ 00:17:16.523 { 00:17:16.523 "name": "BaseBdev1", 00:17:16.523 "uuid": "bef96b16-b9de-4693-bbf7-e292c91ad519", 00:17:16.523 "is_configured": true, 00:17:16.523 "data_offset": 2048, 00:17:16.523 "data_size": 63488 00:17:16.523 }, 00:17:16.523 { 00:17:16.523 "name": "BaseBdev2", 00:17:16.523 "uuid": "e0889de5-828b-4c93-926f-8551a95395fb", 00:17:16.523 "is_configured": true, 00:17:16.523 "data_offset": 2048, 00:17:16.523 "data_size": 63488 00:17:16.523 }, 00:17:16.523 { 00:17:16.523 "name": "BaseBdev3", 00:17:16.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.523 "is_configured": false, 00:17:16.523 "data_offset": 0, 00:17:16.523 "data_size": 0 00:17:16.523 
} 00:17:16.523 ] 00:17:16.523 }' 00:17:16.523 04:59:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:16.523 04:59:46 -- common/autotest_common.sh@10 -- # set +x 00:17:17.088 04:59:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:17.346 [2024-04-27 04:59:47.013576] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:17.346 [2024-04-27 04:59:47.013876] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:17:17.346 [2024-04-27 04:59:47.013893] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:17.346 [2024-04-27 04:59:47.014083] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:17.346 [2024-04-27 04:59:47.014553] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:17:17.346 [2024-04-27 04:59:47.014587] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:17:17.346 [2024-04-27 04:59:47.014764] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.346 BaseBdev3 00:17:17.346 04:59:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:17.346 04:59:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:17.346 04:59:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:17.346 04:59:47 -- common/autotest_common.sh@889 -- # local i 00:17:17.346 04:59:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:17.346 04:59:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:17.346 04:59:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:17.606 04:59:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:17.864 [ 00:17:17.864 { 00:17:17.864 "name": "BaseBdev3", 00:17:17.865 "aliases": [ 00:17:17.865 "ca46a9e9-1189-4ecd-aac4-bb3afccd80d5" 00:17:17.865 ], 00:17:17.865 "product_name": "Malloc disk", 00:17:17.865 "block_size": 512, 00:17:17.865 "num_blocks": 65536, 00:17:17.865 "uuid": "ca46a9e9-1189-4ecd-aac4-bb3afccd80d5", 00:17:17.865 "assigned_rate_limits": { 00:17:17.865 "rw_ios_per_sec": 0, 00:17:17.865 "rw_mbytes_per_sec": 0, 00:17:17.865 "r_mbytes_per_sec": 0, 00:17:17.865 "w_mbytes_per_sec": 0 00:17:17.865 }, 00:17:17.865 "claimed": true, 00:17:17.865 "claim_type": "exclusive_write", 00:17:17.865 "zoned": false, 00:17:17.865 "supported_io_types": { 00:17:17.865 "read": true, 00:17:17.865 "write": true, 00:17:17.865 "unmap": true, 00:17:17.865 "write_zeroes": true, 00:17:17.865 "flush": true, 00:17:17.865 "reset": true, 00:17:17.865 "compare": false, 00:17:17.865 "compare_and_write": false, 00:17:17.865 "abort": true, 00:17:17.865 "nvme_admin": false, 00:17:17.865 "nvme_io": false 00:17:17.865 }, 00:17:17.865 "memory_domains": [ 00:17:17.865 { 00:17:17.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.865 "dma_device_type": 2 00:17:17.865 } 00:17:17.865 ], 00:17:17.865 "driver_specific": {} 00:17:17.865 } 00:17:17.865 ] 00:17:17.865 04:59:47 -- common/autotest_common.sh@895 -- # return 0 00:17:17.865 04:59:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:17.865 04:59:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:17.865 04:59:47 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:17.865 04:59:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:17.865 04:59:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:17.865 04:59:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:17.865 04:59:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:17.865 04:59:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:17.865 04:59:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:17.865 04:59:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:17.865 04:59:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:17.865 04:59:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:17.865 04:59:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.865 04:59:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.123 04:59:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:18.123 "name": "Existed_Raid", 00:17:18.123 "uuid": "a593ba31-9087-47cc-8bca-14f5a506d1c1", 00:17:18.123 "strip_size_kb": 64, 00:17:18.123 "state": "online", 00:17:18.123 "raid_level": "raid0", 00:17:18.123 "superblock": true, 00:17:18.123 "num_base_bdevs": 3, 00:17:18.123 "num_base_bdevs_discovered": 3, 00:17:18.123 "num_base_bdevs_operational": 3, 00:17:18.123 "base_bdevs_list": [ 00:17:18.123 { 00:17:18.123 "name": "BaseBdev1", 00:17:18.123 "uuid": "bef96b16-b9de-4693-bbf7-e292c91ad519", 00:17:18.123 "is_configured": true, 00:17:18.123 "data_offset": 2048, 00:17:18.123 "data_size": 63488 00:17:18.123 }, 00:17:18.123 { 00:17:18.123 "name": "BaseBdev2", 00:17:18.123 "uuid": "e0889de5-828b-4c93-926f-8551a95395fb", 00:17:18.123 "is_configured": true, 00:17:18.123 "data_offset": 2048, 00:17:18.123 "data_size": 63488 00:17:18.123 }, 00:17:18.123 { 00:17:18.123 "name": "BaseBdev3", 00:17:18.123 "uuid": "ca46a9e9-1189-4ecd-aac4-bb3afccd80d5", 00:17:18.123 "is_configured": true, 00:17:18.124 "data_offset": 2048, 00:17:18.124 "data_size": 63488 00:17:18.124 } 00:17:18.124 ] 00:17:18.124 }' 00:17:18.124 04:59:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:18.124 04:59:47 -- common/autotest_common.sh@10 -- # set +x 00:17:18.689 04:59:48 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:18.948 [2024-04-27 04:59:48.762254] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:18.948 [2024-04-27 04:59:48.762313] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.948 [2024-04-27 04:59:48.762410] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.948 04:59:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:18.948 04:59:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:18.948 04:59:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:18.948 04:59:48 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:18.948 04:59:48 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:18.948 04:59:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:18.948 04:59:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:18.948 04:59:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:18.948 04:59:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:18.948 04:59:48 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:17:18.948 04:59:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:18.948 04:59:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:18.948 04:59:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:18.948 04:59:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:18.948 04:59:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:18.948 04:59:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.948 04:59:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.207 04:59:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:19.207 "name": "Existed_Raid", 00:17:19.207 "uuid": "a593ba31-9087-47cc-8bca-14f5a506d1c1", 00:17:19.207 "strip_size_kb": 64, 00:17:19.207 "state": "offline", 00:17:19.207 "raid_level": "raid0", 00:17:19.207 "superblock": true, 00:17:19.207 "num_base_bdevs": 3, 00:17:19.207 "num_base_bdevs_discovered": 2, 00:17:19.207 "num_base_bdevs_operational": 2, 00:17:19.207 "base_bdevs_list": [ 00:17:19.207 { 00:17:19.207 "name": null, 00:17:19.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.207 "is_configured": false, 00:17:19.207 "data_offset": 2048, 00:17:19.207 "data_size": 63488 00:17:19.207 }, 00:17:19.207 { 00:17:19.207 "name": "BaseBdev2", 00:17:19.207 "uuid": "e0889de5-828b-4c93-926f-8551a95395fb", 00:17:19.207 "is_configured": true, 00:17:19.207 "data_offset": 2048, 00:17:19.207 "data_size": 63488 00:17:19.207 }, 00:17:19.207 { 00:17:19.207 "name": "BaseBdev3", 00:17:19.207 "uuid": "ca46a9e9-1189-4ecd-aac4-bb3afccd80d5", 00:17:19.207 "is_configured": true, 00:17:19.207 "data_offset": 2048, 00:17:19.207 "data_size": 63488 00:17:19.207 } 00:17:19.207 ] 00:17:19.207 }' 00:17:19.207 04:59:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:19.207 04:59:49 -- common/autotest_common.sh@10 -- # set +x 00:17:20.142 04:59:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:20.142 04:59:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:20.142 04:59:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.142 04:59:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:20.142 04:59:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:20.142 04:59:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:20.142 04:59:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:20.399 [2024-04-27 04:59:50.205283] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:20.399 04:59:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:20.399 04:59:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:20.399 04:59:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.399 04:59:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:20.657 04:59:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:20.657 04:59:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:20.657 04:59:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:20.914 [2024-04-27 04:59:50.737539] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:20.914 [2024-04-27 
04:59:50.737905] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:17:20.914 04:59:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:20.914 04:59:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:20.914 04:59:50 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.914 04:59:50 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:21.173 04:59:51 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:21.173 04:59:51 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:21.173 04:59:51 -- bdev/bdev_raid.sh@287 -- # killprocess 127504 00:17:21.173 04:59:51 -- common/autotest_common.sh@926 -- # '[' -z 127504 ']' 00:17:21.173 04:59:51 -- common/autotest_common.sh@930 -- # kill -0 127504 00:17:21.173 04:59:51 -- common/autotest_common.sh@931 -- # uname 00:17:21.173 04:59:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:21.173 04:59:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127504 00:17:21.173 04:59:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:21.173 04:59:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:21.173 04:59:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127504' 00:17:21.173 killing process with pid 127504 00:17:21.173 04:59:51 -- common/autotest_common.sh@945 -- # kill 127504 00:17:21.173 04:59:51 -- common/autotest_common.sh@950 -- # wait 127504 00:17:21.173 [2024-04-27 04:59:51.039735] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.173 [2024-04-27 04:59:51.039847] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:21.739 00:17:21.739 real 0m13.156s 00:17:21.739 user 0m24.018s 00:17:21.739 sys 0m1.718s 00:17:21.739 04:59:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:21.739 04:59:51 -- common/autotest_common.sh@10 -- # set +x 00:17:21.739 ************************************ 00:17:21.739 END TEST raid_state_function_test_sb 00:17:21.739 ************************************ 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:17:21.739 04:59:51 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:21.739 04:59:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:21.739 04:59:51 -- common/autotest_common.sh@10 -- # set +x 00:17:21.739 ************************************ 00:17:21.739 START TEST raid_superblock_test 00:17:21.739 ************************************ 00:17:21.739 04:59:51 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:21.739 04:59:51 -- 
bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@357 -- # raid_pid=127896 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:21.739 04:59:51 -- bdev/bdev_raid.sh@358 -- # waitforlisten 127896 /var/tmp/spdk-raid.sock 00:17:21.739 04:59:51 -- common/autotest_common.sh@819 -- # '[' -z 127896 ']' 00:17:21.739 04:59:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:21.739 04:59:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:21.739 04:59:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:21.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:21.739 04:59:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:21.739 04:59:51 -- common/autotest_common.sh@10 -- # set +x 00:17:21.739 [2024-04-27 04:59:51.498158] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:21.739 [2024-04-27 04:59:51.498390] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127896 ] 00:17:21.997 [2024-04-27 04:59:51.660508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.997 [2024-04-27 04:59:51.786024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.997 [2024-04-27 04:59:51.866897] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.562 04:59:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:22.562 04:59:52 -- common/autotest_common.sh@852 -- # return 0 00:17:22.562 04:59:52 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:22.562 04:59:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:22.562 04:59:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:22.562 04:59:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:22.562 04:59:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:22.562 04:59:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:22.562 04:59:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:22.562 04:59:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:22.562 04:59:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:22.820 malloc1 00:17:23.077 04:59:52 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:23.077 [2024-04-27 04:59:52.939510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:23.077 [2024-04-27 04:59:52.939663] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.077 
[2024-04-27 04:59:52.939719] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:17:23.077 [2024-04-27 04:59:52.939797] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.077 [2024-04-27 04:59:52.942869] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.077 [2024-04-27 04:59:52.942935] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:23.077 pt1 00:17:23.077 04:59:52 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:23.077 04:59:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:23.077 04:59:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:23.077 04:59:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:23.077 04:59:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:23.077 04:59:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:23.077 04:59:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:23.077 04:59:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:23.077 04:59:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:23.335 malloc2 00:17:23.335 04:59:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:23.596 [2024-04-27 04:59:53.418592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:23.596 [2024-04-27 04:59:53.418709] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.596 [2024-04-27 04:59:53.418766] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:23.596 [2024-04-27 04:59:53.418845] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.596 [2024-04-27 04:59:53.421692] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.596 [2024-04-27 04:59:53.421750] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:23.596 pt2 00:17:23.596 04:59:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:23.596 04:59:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:23.596 04:59:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:23.596 04:59:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:23.596 04:59:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:23.596 04:59:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:23.596 04:59:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:23.596 04:59:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:23.596 04:59:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:23.860 malloc3 00:17:23.860 04:59:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:24.117 [2024-04-27 04:59:53.911895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:24.117 [2024-04-27 04:59:53.912014] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.118 
[2024-04-27 04:59:53.912075] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:24.118 [2024-04-27 04:59:53.912130] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.118 [2024-04-27 04:59:53.914966] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.118 [2024-04-27 04:59:53.915030] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:24.118 pt3 00:17:24.118 04:59:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:24.118 04:59:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:24.118 04:59:53 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:24.375 [2024-04-27 04:59:54.148006] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:24.375 [2024-04-27 04:59:54.150584] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:24.375 [2024-04-27 04:59:54.150679] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:24.375 [2024-04-27 04:59:54.150940] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:17:24.375 [2024-04-27 04:59:54.150971] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:24.375 [2024-04-27 04:59:54.151178] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:24.375 [2024-04-27 04:59:54.151733] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:17:24.375 [2024-04-27 04:59:54.151761] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:17:24.375 [2024-04-27 04:59:54.152038] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.375 04:59:54 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:24.375 04:59:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:24.375 04:59:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:24.375 04:59:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:24.375 04:59:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:24.375 04:59:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:24.376 04:59:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:24.376 04:59:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:24.376 04:59:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:24.376 04:59:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:24.376 04:59:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.376 04:59:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.633 04:59:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:24.633 "name": "raid_bdev1", 00:17:24.633 "uuid": "de129a0e-88ab-49a3-a317-e35ccc05ca92", 00:17:24.633 "strip_size_kb": 64, 00:17:24.633 "state": "online", 00:17:24.633 "raid_level": "raid0", 00:17:24.633 "superblock": true, 00:17:24.633 "num_base_bdevs": 3, 00:17:24.633 "num_base_bdevs_discovered": 3, 00:17:24.634 "num_base_bdevs_operational": 3, 00:17:24.634 "base_bdevs_list": [ 00:17:24.634 { 00:17:24.634 "name": "pt1", 00:17:24.634 "uuid": 
"9fc21884-40d1-56e2-b8f9-872b19788c82", 00:17:24.634 "is_configured": true, 00:17:24.634 "data_offset": 2048, 00:17:24.634 "data_size": 63488 00:17:24.634 }, 00:17:24.634 { 00:17:24.634 "name": "pt2", 00:17:24.634 "uuid": "55f056db-dc8b-5b40-83df-a3620d38abe8", 00:17:24.634 "is_configured": true, 00:17:24.634 "data_offset": 2048, 00:17:24.634 "data_size": 63488 00:17:24.634 }, 00:17:24.634 { 00:17:24.634 "name": "pt3", 00:17:24.634 "uuid": "1eaf074e-87ef-57f3-a93b-b0e7c279d5f5", 00:17:24.634 "is_configured": true, 00:17:24.634 "data_offset": 2048, 00:17:24.634 "data_size": 63488 00:17:24.634 } 00:17:24.634 ] 00:17:24.634 }' 00:17:24.634 04:59:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:24.634 04:59:54 -- common/autotest_common.sh@10 -- # set +x 00:17:25.568 04:59:55 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:25.568 04:59:55 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:25.568 [2024-04-27 04:59:55.360616] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:25.568 04:59:55 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=de129a0e-88ab-49a3-a317-e35ccc05ca92 00:17:25.568 04:59:55 -- bdev/bdev_raid.sh@380 -- # '[' -z de129a0e-88ab-49a3-a317-e35ccc05ca92 ']' 00:17:25.568 04:59:55 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:25.827 [2024-04-27 04:59:55.596378] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:25.827 [2024-04-27 04:59:55.596438] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:25.827 [2024-04-27 04:59:55.596610] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.827 [2024-04-27 04:59:55.596713] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:25.827 [2024-04-27 04:59:55.596728] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:17:25.827 04:59:55 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.827 04:59:55 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:26.085 04:59:55 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:26.085 04:59:55 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:26.085 04:59:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:26.085 04:59:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:26.344 04:59:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:26.344 04:59:56 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:26.602 04:59:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:26.602 04:59:56 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:26.861 04:59:56 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:26.861 04:59:56 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:27.119 04:59:56 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:27.119 04:59:56 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:27.119 04:59:56 -- common/autotest_common.sh@640 -- # local es=0 00:17:27.119 04:59:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:27.119 04:59:56 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:27.119 04:59:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:27.119 04:59:56 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:27.119 04:59:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:27.119 04:59:56 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:27.119 04:59:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:27.119 04:59:56 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:27.119 04:59:56 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:27.119 04:59:56 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:27.378 [2024-04-27 04:59:57.108723] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:27.378 [2024-04-27 04:59:57.111260] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:27.378 [2024-04-27 04:59:57.111333] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:27.378 [2024-04-27 04:59:57.111404] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:27.378 [2024-04-27 04:59:57.111517] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:27.378 [2024-04-27 04:59:57.111579] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:27.378 [2024-04-27 04:59:57.111666] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:27.378 [2024-04-27 04:59:57.111684] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:17:27.378 request: 00:17:27.378 { 00:17:27.378 "name": "raid_bdev1", 00:17:27.378 "raid_level": "raid0", 00:17:27.378 "base_bdevs": [ 00:17:27.378 "malloc1", 00:17:27.378 "malloc2", 00:17:27.378 "malloc3" 00:17:27.378 ], 00:17:27.378 "superblock": false, 00:17:27.378 "strip_size_kb": 64, 00:17:27.378 "method": "bdev_raid_create", 00:17:27.378 "req_id": 1 00:17:27.378 } 00:17:27.378 Got JSON-RPC error response 00:17:27.378 response: 00:17:27.378 { 00:17:27.378 "code": -17, 00:17:27.378 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:27.378 } 00:17:27.378 04:59:57 -- common/autotest_common.sh@643 -- # es=1 00:17:27.378 04:59:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:27.378 04:59:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:27.378 04:59:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:27.378 04:59:57 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.378 04:59:57 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:27.636 04:59:57 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:27.636 04:59:57 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:27.636 04:59:57 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:27.894 [2024-04-27 04:59:57.592750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:27.894 [2024-04-27 04:59:57.592875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.894 [2024-04-27 04:59:57.592933] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:27.894 [2024-04-27 04:59:57.592967] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.894 [2024-04-27 04:59:57.595822] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.894 [2024-04-27 04:59:57.595883] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:27.894 [2024-04-27 04:59:57.596020] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:27.894 [2024-04-27 04:59:57.596083] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:27.894 pt1 00:17:27.894 04:59:57 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:27.894 04:59:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:27.894 04:59:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:27.894 04:59:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:27.894 04:59:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:27.894 04:59:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:27.894 04:59:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:27.894 04:59:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:27.894 04:59:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:27.894 04:59:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:27.894 04:59:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.895 04:59:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.153 04:59:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:28.153 "name": "raid_bdev1", 00:17:28.153 "uuid": "de129a0e-88ab-49a3-a317-e35ccc05ca92", 00:17:28.153 "strip_size_kb": 64, 00:17:28.153 "state": "configuring", 00:17:28.153 "raid_level": "raid0", 00:17:28.153 "superblock": true, 00:17:28.153 "num_base_bdevs": 3, 00:17:28.153 "num_base_bdevs_discovered": 1, 00:17:28.153 "num_base_bdevs_operational": 3, 00:17:28.153 "base_bdevs_list": [ 00:17:28.153 { 00:17:28.153 "name": "pt1", 00:17:28.153 "uuid": "9fc21884-40d1-56e2-b8f9-872b19788c82", 00:17:28.153 "is_configured": true, 00:17:28.153 "data_offset": 2048, 00:17:28.153 "data_size": 63488 00:17:28.153 }, 00:17:28.153 { 00:17:28.153 "name": null, 00:17:28.153 "uuid": "55f056db-dc8b-5b40-83df-a3620d38abe8", 00:17:28.153 "is_configured": false, 00:17:28.153 "data_offset": 2048, 00:17:28.153 "data_size": 63488 00:17:28.153 }, 00:17:28.153 { 00:17:28.153 "name": null, 00:17:28.153 "uuid": "1eaf074e-87ef-57f3-a93b-b0e7c279d5f5", 00:17:28.153 "is_configured": false, 00:17:28.153 
"data_offset": 2048, 00:17:28.153 "data_size": 63488 00:17:28.153 } 00:17:28.153 ] 00:17:28.153 }' 00:17:28.153 04:59:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:28.153 04:59:57 -- common/autotest_common.sh@10 -- # set +x 00:17:28.720 04:59:58 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:28.720 04:59:58 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:28.980 [2024-04-27 04:59:58.785324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:28.980 [2024-04-27 04:59:58.785473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.980 [2024-04-27 04:59:58.785537] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:28.980 [2024-04-27 04:59:58.785563] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.980 [2024-04-27 04:59:58.786164] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.980 [2024-04-27 04:59:58.786221] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:28.980 [2024-04-27 04:59:58.786384] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:28.980 [2024-04-27 04:59:58.786418] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:28.980 pt2 00:17:28.980 04:59:58 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:29.239 [2024-04-27 04:59:59.061476] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:29.239 04:59:59 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:29.239 04:59:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:29.239 04:59:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:29.239 04:59:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:29.239 04:59:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:29.239 04:59:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:29.239 04:59:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:29.239 04:59:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:29.239 04:59:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:29.239 04:59:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:29.239 04:59:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.239 04:59:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.498 04:59:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:29.498 "name": "raid_bdev1", 00:17:29.498 "uuid": "de129a0e-88ab-49a3-a317-e35ccc05ca92", 00:17:29.498 "strip_size_kb": 64, 00:17:29.498 "state": "configuring", 00:17:29.498 "raid_level": "raid0", 00:17:29.498 "superblock": true, 00:17:29.498 "num_base_bdevs": 3, 00:17:29.498 "num_base_bdevs_discovered": 1, 00:17:29.498 "num_base_bdevs_operational": 3, 00:17:29.498 "base_bdevs_list": [ 00:17:29.498 { 00:17:29.498 "name": "pt1", 00:17:29.498 "uuid": "9fc21884-40d1-56e2-b8f9-872b19788c82", 00:17:29.498 "is_configured": true, 00:17:29.498 "data_offset": 2048, 00:17:29.498 "data_size": 63488 00:17:29.498 }, 00:17:29.498 { 00:17:29.498 "name": null, 00:17:29.498 "uuid": 
"55f056db-dc8b-5b40-83df-a3620d38abe8", 00:17:29.498 "is_configured": false, 00:17:29.498 "data_offset": 2048, 00:17:29.498 "data_size": 63488 00:17:29.498 }, 00:17:29.498 { 00:17:29.498 "name": null, 00:17:29.498 "uuid": "1eaf074e-87ef-57f3-a93b-b0e7c279d5f5", 00:17:29.498 "is_configured": false, 00:17:29.498 "data_offset": 2048, 00:17:29.498 "data_size": 63488 00:17:29.498 } 00:17:29.498 ] 00:17:29.498 }' 00:17:29.498 04:59:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:29.498 04:59:59 -- common/autotest_common.sh@10 -- # set +x 00:17:30.433 04:59:59 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:30.433 04:59:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:30.433 04:59:59 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:30.433 [2024-04-27 05:00:00.237661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:30.433 [2024-04-27 05:00:00.237810] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.433 [2024-04-27 05:00:00.237883] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:30.433 [2024-04-27 05:00:00.237920] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.433 [2024-04-27 05:00:00.238550] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.433 [2024-04-27 05:00:00.238610] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:30.433 [2024-04-27 05:00:00.238734] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:30.433 [2024-04-27 05:00:00.238777] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:30.433 pt2 00:17:30.433 05:00:00 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:30.433 05:00:00 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:30.433 05:00:00 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:30.691 [2024-04-27 05:00:00.509739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:30.691 [2024-04-27 05:00:00.509862] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.691 [2024-04-27 05:00:00.509914] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:30.691 [2024-04-27 05:00:00.509949] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.691 [2024-04-27 05:00:00.510545] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.691 [2024-04-27 05:00:00.510610] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:30.691 [2024-04-27 05:00:00.510763] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:30.691 [2024-04-27 05:00:00.510797] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:30.691 [2024-04-27 05:00:00.510984] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:17:30.691 [2024-04-27 05:00:00.511014] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:30.691 [2024-04-27 05:00:00.511132] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:17:30.691 [2024-04-27 05:00:00.511533] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:17:30.691 [2024-04-27 05:00:00.511561] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:17:30.691 [2024-04-27 05:00:00.511684] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.691 pt3 00:17:30.691 05:00:00 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:30.691 05:00:00 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:30.691 05:00:00 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:30.691 05:00:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:30.691 05:00:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:30.691 05:00:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:30.691 05:00:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:30.691 05:00:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:30.691 05:00:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:30.691 05:00:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:30.691 05:00:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:30.691 05:00:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:30.691 05:00:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.691 05:00:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.949 05:00:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:30.949 "name": "raid_bdev1", 00:17:30.949 "uuid": "de129a0e-88ab-49a3-a317-e35ccc05ca92", 00:17:30.949 "strip_size_kb": 64, 00:17:30.949 "state": "online", 00:17:30.949 "raid_level": "raid0", 00:17:30.949 "superblock": true, 00:17:30.949 "num_base_bdevs": 3, 00:17:30.949 "num_base_bdevs_discovered": 3, 00:17:30.949 "num_base_bdevs_operational": 3, 00:17:30.949 "base_bdevs_list": [ 00:17:30.949 { 00:17:30.949 "name": "pt1", 00:17:30.949 "uuid": "9fc21884-40d1-56e2-b8f9-872b19788c82", 00:17:30.949 "is_configured": true, 00:17:30.949 "data_offset": 2048, 00:17:30.949 "data_size": 63488 00:17:30.949 }, 00:17:30.949 { 00:17:30.949 "name": "pt2", 00:17:30.949 "uuid": "55f056db-dc8b-5b40-83df-a3620d38abe8", 00:17:30.949 "is_configured": true, 00:17:30.949 "data_offset": 2048, 00:17:30.949 "data_size": 63488 00:17:30.949 }, 00:17:30.949 { 00:17:30.949 "name": "pt3", 00:17:30.949 "uuid": "1eaf074e-87ef-57f3-a93b-b0e7c279d5f5", 00:17:30.949 "is_configured": true, 00:17:30.949 "data_offset": 2048, 00:17:30.949 "data_size": 63488 00:17:30.949 } 00:17:30.949 ] 00:17:30.949 }' 00:17:30.949 05:00:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:30.949 05:00:00 -- common/autotest_common.sh@10 -- # set +x 00:17:31.880 05:00:01 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:31.881 05:00:01 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:31.881 [2024-04-27 05:00:01.666306] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.881 05:00:01 -- bdev/bdev_raid.sh@430 -- # '[' de129a0e-88ab-49a3-a317-e35ccc05ca92 '!=' de129a0e-88ab-49a3-a317-e35ccc05ca92 ']' 00:17:31.881 05:00:01 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:17:31.881 05:00:01 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:31.881 
05:00:01 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:31.881 05:00:01 -- bdev/bdev_raid.sh@511 -- # killprocess 127896 00:17:31.881 05:00:01 -- common/autotest_common.sh@926 -- # '[' -z 127896 ']' 00:17:31.881 05:00:01 -- common/autotest_common.sh@930 -- # kill -0 127896 00:17:31.881 05:00:01 -- common/autotest_common.sh@931 -- # uname 00:17:31.881 05:00:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:31.881 05:00:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127896 00:17:31.881 05:00:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:31.881 05:00:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:31.881 05:00:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127896' 00:17:31.881 killing process with pid 127896 00:17:31.881 05:00:01 -- common/autotest_common.sh@945 -- # kill 127896 00:17:31.881 05:00:01 -- common/autotest_common.sh@950 -- # wait 127896 00:17:31.881 [2024-04-27 05:00:01.714802] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:31.881 [2024-04-27 05:00:01.714949] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.881 [2024-04-27 05:00:01.715039] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.881 [2024-04-27 05:00:01.715066] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:17:31.881 [2024-04-27 05:00:01.773434] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:32.446 00:17:32.446 real 0m10.676s 00:17:32.446 user 0m19.213s 00:17:32.446 sys 0m1.518s 00:17:32.446 05:00:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:32.446 05:00:02 -- common/autotest_common.sh@10 -- # set +x 00:17:32.446 ************************************ 00:17:32.446 END TEST raid_superblock_test 00:17:32.446 ************************************ 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:17:32.446 05:00:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:32.446 05:00:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:32.446 05:00:02 -- common/autotest_common.sh@10 -- # set +x 00:17:32.446 ************************************ 00:17:32.446 START TEST raid_state_function_test 00:17:32.446 ************************************ 00:17:32.446 05:00:02 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=128214 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128214' 00:17:32.446 Process raid pid: 128214 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:32.446 05:00:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128214 /var/tmp/spdk-raid.sock 00:17:32.446 05:00:02 -- common/autotest_common.sh@819 -- # '[' -z 128214 ']' 00:17:32.446 05:00:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:32.446 05:00:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:32.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:32.446 05:00:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:32.446 05:00:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:32.446 05:00:02 -- common/autotest_common.sh@10 -- # set +x 00:17:32.446 [2024-04-27 05:00:02.244593] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
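The trace above shows the harness pattern these RAID tests repeat for every case: start a bdev_svc app on a private RPC socket with the bdev_raid log flag, wait for the socket to come up, then drive every step through rpc.py. A condensed sketch of that pattern, assembled from commands visible in this log rather than copied from the test script, is:

  # sketch only: the backgrounding and pid capture are assumed; paths and flags are taken from the trace
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid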
00:17:32.446 [2024-04-27 05:00:02.244881] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.703 [2024-04-27 05:00:02.414388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.703 [2024-04-27 05:00:02.542273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.961 [2024-04-27 05:00:02.624664] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.540 05:00:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:33.540 05:00:03 -- common/autotest_common.sh@852 -- # return 0 00:17:33.540 05:00:03 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:33.798 [2024-04-27 05:00:03.498051] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:33.798 [2024-04-27 05:00:03.498176] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:33.798 [2024-04-27 05:00:03.498194] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:33.798 [2024-04-27 05:00:03.498218] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:33.798 [2024-04-27 05:00:03.498228] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:33.798 [2024-04-27 05:00:03.498281] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:33.798 05:00:03 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:33.798 05:00:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:33.798 05:00:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:33.798 05:00:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:33.798 05:00:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:33.798 05:00:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:33.798 05:00:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:33.798 05:00:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:33.798 05:00:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:33.798 05:00:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:33.798 05:00:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.798 05:00:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.056 05:00:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:34.056 "name": "Existed_Raid", 00:17:34.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.056 "strip_size_kb": 64, 00:17:34.056 "state": "configuring", 00:17:34.056 "raid_level": "concat", 00:17:34.056 "superblock": false, 00:17:34.056 "num_base_bdevs": 3, 00:17:34.056 "num_base_bdevs_discovered": 0, 00:17:34.056 "num_base_bdevs_operational": 3, 00:17:34.056 "base_bdevs_list": [ 00:17:34.056 { 00:17:34.056 "name": "BaseBdev1", 00:17:34.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.056 "is_configured": false, 00:17:34.056 "data_offset": 0, 00:17:34.056 "data_size": 0 00:17:34.056 }, 00:17:34.056 { 00:17:34.056 "name": "BaseBdev2", 00:17:34.056 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:34.056 "is_configured": false, 00:17:34.057 "data_offset": 0, 00:17:34.057 "data_size": 0 00:17:34.057 }, 00:17:34.057 { 00:17:34.057 "name": "BaseBdev3", 00:17:34.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.057 "is_configured": false, 00:17:34.057 "data_offset": 0, 00:17:34.057 "data_size": 0 00:17:34.057 } 00:17:34.057 ] 00:17:34.057 }' 00:17:34.057 05:00:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:34.057 05:00:03 -- common/autotest_common.sh@10 -- # set +x 00:17:34.653 05:00:04 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:34.912 [2024-04-27 05:00:04.702189] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:34.912 [2024-04-27 05:00:04.702259] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:17:34.912 05:00:04 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:35.179 [2024-04-27 05:00:04.986282] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:35.179 [2024-04-27 05:00:04.986393] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:35.179 [2024-04-27 05:00:04.986409] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.179 [2024-04-27 05:00:04.986444] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.179 [2024-04-27 05:00:04.986454] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:35.179 [2024-04-27 05:00:04.986486] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:35.179 05:00:05 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:35.436 [2024-04-27 05:00:05.277246] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.437 BaseBdev1 00:17:35.437 05:00:05 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:35.437 05:00:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:35.437 05:00:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:35.437 05:00:05 -- common/autotest_common.sh@889 -- # local i 00:17:35.437 05:00:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:35.437 05:00:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:35.437 05:00:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:35.694 05:00:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:35.953 [ 00:17:35.954 { 00:17:35.954 "name": "BaseBdev1", 00:17:35.954 "aliases": [ 00:17:35.954 "a16e4a3a-36cb-49ed-a6e7-5e166bb8268d" 00:17:35.954 ], 00:17:35.954 "product_name": "Malloc disk", 00:17:35.954 "block_size": 512, 00:17:35.954 "num_blocks": 65536, 00:17:35.954 "uuid": "a16e4a3a-36cb-49ed-a6e7-5e166bb8268d", 00:17:35.954 "assigned_rate_limits": { 00:17:35.954 "rw_ios_per_sec": 0, 00:17:35.954 "rw_mbytes_per_sec": 0, 00:17:35.954 "r_mbytes_per_sec": 0, 00:17:35.954 "w_mbytes_per_sec": 
0 00:17:35.954 }, 00:17:35.954 "claimed": true, 00:17:35.954 "claim_type": "exclusive_write", 00:17:35.954 "zoned": false, 00:17:35.954 "supported_io_types": { 00:17:35.954 "read": true, 00:17:35.954 "write": true, 00:17:35.954 "unmap": true, 00:17:35.954 "write_zeroes": true, 00:17:35.954 "flush": true, 00:17:35.954 "reset": true, 00:17:35.954 "compare": false, 00:17:35.954 "compare_and_write": false, 00:17:35.954 "abort": true, 00:17:35.954 "nvme_admin": false, 00:17:35.954 "nvme_io": false 00:17:35.954 }, 00:17:35.954 "memory_domains": [ 00:17:35.954 { 00:17:35.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.954 "dma_device_type": 2 00:17:35.954 } 00:17:35.954 ], 00:17:35.954 "driver_specific": {} 00:17:35.954 } 00:17:35.954 ] 00:17:35.954 05:00:05 -- common/autotest_common.sh@895 -- # return 0 00:17:35.954 05:00:05 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:35.954 05:00:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:35.954 05:00:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:35.954 05:00:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:35.954 05:00:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:35.954 05:00:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:35.954 05:00:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:35.954 05:00:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:35.954 05:00:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:35.954 05:00:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:35.954 05:00:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.954 05:00:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.212 05:00:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:36.212 "name": "Existed_Raid", 00:17:36.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.212 "strip_size_kb": 64, 00:17:36.212 "state": "configuring", 00:17:36.212 "raid_level": "concat", 00:17:36.212 "superblock": false, 00:17:36.212 "num_base_bdevs": 3, 00:17:36.212 "num_base_bdevs_discovered": 1, 00:17:36.212 "num_base_bdevs_operational": 3, 00:17:36.212 "base_bdevs_list": [ 00:17:36.212 { 00:17:36.212 "name": "BaseBdev1", 00:17:36.212 "uuid": "a16e4a3a-36cb-49ed-a6e7-5e166bb8268d", 00:17:36.212 "is_configured": true, 00:17:36.213 "data_offset": 0, 00:17:36.213 "data_size": 65536 00:17:36.213 }, 00:17:36.213 { 00:17:36.213 "name": "BaseBdev2", 00:17:36.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.213 "is_configured": false, 00:17:36.213 "data_offset": 0, 00:17:36.213 "data_size": 0 00:17:36.213 }, 00:17:36.213 { 00:17:36.213 "name": "BaseBdev3", 00:17:36.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.213 "is_configured": false, 00:17:36.213 "data_offset": 0, 00:17:36.213 "data_size": 0 00:17:36.213 } 00:17:36.213 ] 00:17:36.213 }' 00:17:36.213 05:00:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:36.213 05:00:06 -- common/autotest_common.sh@10 -- # set +x 00:17:37.147 05:00:06 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:37.147 [2024-04-27 05:00:06.913766] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:37.147 [2024-04-27 05:00:06.913875] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006680 name Existed_Raid, state configuring 00:17:37.147 05:00:06 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:37.147 05:00:06 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:37.407 [2024-04-27 05:00:07.181968] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:37.407 [2024-04-27 05:00:07.184586] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:37.407 [2024-04-27 05:00:07.184672] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:37.407 [2024-04-27 05:00:07.184688] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:37.407 [2024-04-27 05:00:07.184721] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:37.407 05:00:07 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:37.407 05:00:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:37.407 05:00:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:37.407 05:00:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:37.407 05:00:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:37.407 05:00:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:37.407 05:00:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:37.407 05:00:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:37.407 05:00:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:37.407 05:00:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:37.407 05:00:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:37.407 05:00:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:37.407 05:00:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.407 05:00:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.666 05:00:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:37.666 "name": "Existed_Raid", 00:17:37.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.666 "strip_size_kb": 64, 00:17:37.666 "state": "configuring", 00:17:37.666 "raid_level": "concat", 00:17:37.666 "superblock": false, 00:17:37.666 "num_base_bdevs": 3, 00:17:37.666 "num_base_bdevs_discovered": 1, 00:17:37.666 "num_base_bdevs_operational": 3, 00:17:37.666 "base_bdevs_list": [ 00:17:37.666 { 00:17:37.666 "name": "BaseBdev1", 00:17:37.666 "uuid": "a16e4a3a-36cb-49ed-a6e7-5e166bb8268d", 00:17:37.666 "is_configured": true, 00:17:37.666 "data_offset": 0, 00:17:37.666 "data_size": 65536 00:17:37.666 }, 00:17:37.666 { 00:17:37.666 "name": "BaseBdev2", 00:17:37.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.666 "is_configured": false, 00:17:37.666 "data_offset": 0, 00:17:37.666 "data_size": 0 00:17:37.666 }, 00:17:37.666 { 00:17:37.666 "name": "BaseBdev3", 00:17:37.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.666 "is_configured": false, 00:17:37.666 "data_offset": 0, 00:17:37.666 "data_size": 0 00:17:37.666 } 00:17:37.666 ] 00:17:37.666 }' 00:17:37.666 05:00:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:37.666 05:00:07 -- common/autotest_common.sh@10 -- # set +x 00:17:38.243 05:00:08 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:38.807 [2024-04-27 05:00:08.410207] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:38.807 BaseBdev2 00:17:38.807 05:00:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:38.807 05:00:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:38.807 05:00:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:38.807 05:00:08 -- common/autotest_common.sh@889 -- # local i 00:17:38.807 05:00:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:38.807 05:00:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:38.807 05:00:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:38.807 05:00:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:39.066 [ 00:17:39.066 { 00:17:39.066 "name": "BaseBdev2", 00:17:39.066 "aliases": [ 00:17:39.066 "e4f0185b-ffd1-4242-948b-99bb4f6875f8" 00:17:39.066 ], 00:17:39.066 "product_name": "Malloc disk", 00:17:39.066 "block_size": 512, 00:17:39.066 "num_blocks": 65536, 00:17:39.066 "uuid": "e4f0185b-ffd1-4242-948b-99bb4f6875f8", 00:17:39.066 "assigned_rate_limits": { 00:17:39.066 "rw_ios_per_sec": 0, 00:17:39.066 "rw_mbytes_per_sec": 0, 00:17:39.066 "r_mbytes_per_sec": 0, 00:17:39.066 "w_mbytes_per_sec": 0 00:17:39.066 }, 00:17:39.066 "claimed": true, 00:17:39.066 "claim_type": "exclusive_write", 00:17:39.066 "zoned": false, 00:17:39.066 "supported_io_types": { 00:17:39.066 "read": true, 00:17:39.066 "write": true, 00:17:39.066 "unmap": true, 00:17:39.066 "write_zeroes": true, 00:17:39.066 "flush": true, 00:17:39.066 "reset": true, 00:17:39.066 "compare": false, 00:17:39.066 "compare_and_write": false, 00:17:39.066 "abort": true, 00:17:39.066 "nvme_admin": false, 00:17:39.066 "nvme_io": false 00:17:39.066 }, 00:17:39.066 "memory_domains": [ 00:17:39.066 { 00:17:39.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.066 "dma_device_type": 2 00:17:39.066 } 00:17:39.066 ], 00:17:39.066 "driver_specific": {} 00:17:39.066 } 00:17:39.066 ] 00:17:39.066 05:00:08 -- common/autotest_common.sh@895 -- # return 0 00:17:39.066 05:00:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:39.066 05:00:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:39.066 05:00:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:39.066 05:00:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:39.066 05:00:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:39.066 05:00:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:39.066 05:00:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:39.066 05:00:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:39.066 05:00:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:39.066 05:00:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:39.066 05:00:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:39.066 05:00:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:39.066 05:00:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.066 05:00:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
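Each verify_raid_bdev_state step in this trace reduces to a single query: dump every raid bdev over the RPC socket, select the array under test with jq, and compare fields such as state, num_base_bdevs_discovered and num_base_bdevs_operational against the expected values. A minimal form of that query, using only commands that appear verbatim above, is:

  # the RPC and the jq filter are copied from the trace; the field-by-field comparison is not reproduced here
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

The JSON captured next in the trace is the output of exactly this pipeline.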
00:17:39.323 05:00:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:39.323 "name": "Existed_Raid", 00:17:39.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.323 "strip_size_kb": 64, 00:17:39.323 "state": "configuring", 00:17:39.323 "raid_level": "concat", 00:17:39.323 "superblock": false, 00:17:39.323 "num_base_bdevs": 3, 00:17:39.323 "num_base_bdevs_discovered": 2, 00:17:39.323 "num_base_bdevs_operational": 3, 00:17:39.323 "base_bdevs_list": [ 00:17:39.323 { 00:17:39.323 "name": "BaseBdev1", 00:17:39.323 "uuid": "a16e4a3a-36cb-49ed-a6e7-5e166bb8268d", 00:17:39.323 "is_configured": true, 00:17:39.323 "data_offset": 0, 00:17:39.323 "data_size": 65536 00:17:39.323 }, 00:17:39.323 { 00:17:39.323 "name": "BaseBdev2", 00:17:39.323 "uuid": "e4f0185b-ffd1-4242-948b-99bb4f6875f8", 00:17:39.323 "is_configured": true, 00:17:39.323 "data_offset": 0, 00:17:39.323 "data_size": 65536 00:17:39.323 }, 00:17:39.323 { 00:17:39.323 "name": "BaseBdev3", 00:17:39.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.323 "is_configured": false, 00:17:39.323 "data_offset": 0, 00:17:39.323 "data_size": 0 00:17:39.323 } 00:17:39.323 ] 00:17:39.323 }' 00:17:39.323 05:00:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:39.323 05:00:09 -- common/autotest_common.sh@10 -- # set +x 00:17:40.254 05:00:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:40.254 [2024-04-27 05:00:10.074171] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:40.254 [2024-04-27 05:00:10.074258] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:17:40.254 [2024-04-27 05:00:10.074271] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:40.254 [2024-04-27 05:00:10.074419] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:40.254 [2024-04-27 05:00:10.074884] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:17:40.254 [2024-04-27 05:00:10.074910] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:17:40.254 [2024-04-27 05:00:10.075217] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.254 BaseBdev3 00:17:40.254 05:00:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:40.254 05:00:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:40.254 05:00:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:40.254 05:00:10 -- common/autotest_common.sh@889 -- # local i 00:17:40.254 05:00:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:40.254 05:00:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:40.254 05:00:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:40.512 05:00:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:40.770 [ 00:17:40.770 { 00:17:40.770 "name": "BaseBdev3", 00:17:40.770 "aliases": [ 00:17:40.770 "68f356e0-ba52-4753-8eec-49c517cab8b9" 00:17:40.770 ], 00:17:40.770 "product_name": "Malloc disk", 00:17:40.770 "block_size": 512, 00:17:40.770 "num_blocks": 65536, 00:17:40.770 "uuid": "68f356e0-ba52-4753-8eec-49c517cab8b9", 00:17:40.770 "assigned_rate_limits": { 00:17:40.770 
"rw_ios_per_sec": 0, 00:17:40.770 "rw_mbytes_per_sec": 0, 00:17:40.770 "r_mbytes_per_sec": 0, 00:17:40.770 "w_mbytes_per_sec": 0 00:17:40.770 }, 00:17:40.770 "claimed": true, 00:17:40.770 "claim_type": "exclusive_write", 00:17:40.770 "zoned": false, 00:17:40.770 "supported_io_types": { 00:17:40.770 "read": true, 00:17:40.770 "write": true, 00:17:40.770 "unmap": true, 00:17:40.770 "write_zeroes": true, 00:17:40.770 "flush": true, 00:17:40.770 "reset": true, 00:17:40.770 "compare": false, 00:17:40.770 "compare_and_write": false, 00:17:40.770 "abort": true, 00:17:40.770 "nvme_admin": false, 00:17:40.770 "nvme_io": false 00:17:40.771 }, 00:17:40.771 "memory_domains": [ 00:17:40.771 { 00:17:40.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.771 "dma_device_type": 2 00:17:40.771 } 00:17:40.771 ], 00:17:40.771 "driver_specific": {} 00:17:40.771 } 00:17:40.771 ] 00:17:40.771 05:00:10 -- common/autotest_common.sh@895 -- # return 0 00:17:40.771 05:00:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:40.771 05:00:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:40.771 05:00:10 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:40.771 05:00:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:40.771 05:00:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:40.771 05:00:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:40.771 05:00:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:40.771 05:00:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:40.771 05:00:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:40.771 05:00:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:40.771 05:00:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:40.771 05:00:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:40.771 05:00:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.771 05:00:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.029 05:00:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:41.029 "name": "Existed_Raid", 00:17:41.029 "uuid": "f70c6d0b-15dd-4707-8886-7beff6c8c257", 00:17:41.029 "strip_size_kb": 64, 00:17:41.029 "state": "online", 00:17:41.029 "raid_level": "concat", 00:17:41.029 "superblock": false, 00:17:41.029 "num_base_bdevs": 3, 00:17:41.029 "num_base_bdevs_discovered": 3, 00:17:41.029 "num_base_bdevs_operational": 3, 00:17:41.029 "base_bdevs_list": [ 00:17:41.029 { 00:17:41.029 "name": "BaseBdev1", 00:17:41.029 "uuid": "a16e4a3a-36cb-49ed-a6e7-5e166bb8268d", 00:17:41.030 "is_configured": true, 00:17:41.030 "data_offset": 0, 00:17:41.030 "data_size": 65536 00:17:41.030 }, 00:17:41.030 { 00:17:41.030 "name": "BaseBdev2", 00:17:41.030 "uuid": "e4f0185b-ffd1-4242-948b-99bb4f6875f8", 00:17:41.030 "is_configured": true, 00:17:41.030 "data_offset": 0, 00:17:41.030 "data_size": 65536 00:17:41.030 }, 00:17:41.030 { 00:17:41.030 "name": "BaseBdev3", 00:17:41.030 "uuid": "68f356e0-ba52-4753-8eec-49c517cab8b9", 00:17:41.030 "is_configured": true, 00:17:41.030 "data_offset": 0, 00:17:41.030 "data_size": 65536 00:17:41.030 } 00:17:41.030 ] 00:17:41.030 }' 00:17:41.030 05:00:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:41.030 05:00:10 -- common/autotest_common.sh@10 -- # set +x 00:17:41.593 05:00:11 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:17:42.159 [2024-04-27 05:00:11.749441] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:42.159 [2024-04-27 05:00:11.749518] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:42.159 [2024-04-27 05:00:11.749621] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.159 05:00:11 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:42.159 05:00:11 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:42.159 05:00:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:42.159 05:00:11 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:42.159 05:00:11 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:42.159 05:00:11 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:42.159 05:00:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:42.159 05:00:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:42.159 05:00:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:42.159 05:00:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:42.159 05:00:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:42.159 05:00:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:42.159 05:00:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:42.159 05:00:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:42.159 05:00:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:42.159 05:00:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.159 05:00:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.159 05:00:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:42.159 "name": "Existed_Raid", 00:17:42.159 "uuid": "f70c6d0b-15dd-4707-8886-7beff6c8c257", 00:17:42.159 "strip_size_kb": 64, 00:17:42.159 "state": "offline", 00:17:42.159 "raid_level": "concat", 00:17:42.159 "superblock": false, 00:17:42.159 "num_base_bdevs": 3, 00:17:42.159 "num_base_bdevs_discovered": 2, 00:17:42.159 "num_base_bdevs_operational": 2, 00:17:42.159 "base_bdevs_list": [ 00:17:42.159 { 00:17:42.159 "name": null, 00:17:42.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.159 "is_configured": false, 00:17:42.159 "data_offset": 0, 00:17:42.159 "data_size": 65536 00:17:42.159 }, 00:17:42.159 { 00:17:42.159 "name": "BaseBdev2", 00:17:42.159 "uuid": "e4f0185b-ffd1-4242-948b-99bb4f6875f8", 00:17:42.159 "is_configured": true, 00:17:42.159 "data_offset": 0, 00:17:42.159 "data_size": 65536 00:17:42.159 }, 00:17:42.159 { 00:17:42.159 "name": "BaseBdev3", 00:17:42.159 "uuid": "68f356e0-ba52-4753-8eec-49c517cab8b9", 00:17:42.159 "is_configured": true, 00:17:42.159 "data_offset": 0, 00:17:42.159 "data_size": 65536 00:17:42.159 } 00:17:42.159 ] 00:17:42.159 }' 00:17:42.159 05:00:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:42.159 05:00:12 -- common/autotest_common.sh@10 -- # set +x 00:17:43.098 05:00:12 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:43.098 05:00:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:43.098 05:00:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.098 05:00:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:43.098 05:00:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:43.098 05:00:12 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:43.098 05:00:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:43.356 [2024-04-27 05:00:13.172373] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:43.356 05:00:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:43.356 05:00:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:43.356 05:00:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.356 05:00:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:43.616 05:00:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:43.616 05:00:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:43.616 05:00:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:43.874 [2024-04-27 05:00:13.657190] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:43.874 [2024-04-27 05:00:13.657297] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:17:43.874 05:00:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:43.874 05:00:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:43.874 05:00:13 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.874 05:00:13 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:44.133 05:00:13 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:44.133 05:00:13 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:44.133 05:00:13 -- bdev/bdev_raid.sh@287 -- # killprocess 128214 00:17:44.133 05:00:13 -- common/autotest_common.sh@926 -- # '[' -z 128214 ']' 00:17:44.133 05:00:13 -- common/autotest_common.sh@930 -- # kill -0 128214 00:17:44.133 05:00:13 -- common/autotest_common.sh@931 -- # uname 00:17:44.133 05:00:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:44.133 05:00:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128214 00:17:44.133 05:00:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:44.133 05:00:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:44.133 05:00:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128214' 00:17:44.133 killing process with pid 128214 00:17:44.133 05:00:13 -- common/autotest_common.sh@945 -- # kill 128214 00:17:44.133 05:00:13 -- common/autotest_common.sh@950 -- # wait 128214 00:17:44.133 [2024-04-27 05:00:13.970837] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:44.133 [2024-04-27 05:00:13.970950] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:44.700 00:17:44.700 real 0m12.168s 00:17:44.700 user 0m22.052s 00:17:44.700 sys 0m1.689s 00:17:44.700 05:00:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:44.700 05:00:14 -- common/autotest_common.sh@10 -- # set +x 00:17:44.700 ************************************ 00:17:44.700 END TEST raid_state_function_test 00:17:44.700 ************************************ 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:17:44.700 05:00:14 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 
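raid_state_function_test_sb, which starts next, reruns the same state checks with superblock=true. The only functional change visible in the trace is that bdev_raid_create gains the -s flag and the base bdevs reserve room for the superblock, which is why the later dumps report data_offset 2048 instead of 0. A sketch of the changed call, matching the command that appears further down in this log:

  # -s requests an on-disk superblock; all other arguments are unchanged from the non-sb run
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid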
00:17:44.700 05:00:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:44.700 05:00:14 -- common/autotest_common.sh@10 -- # set +x 00:17:44.700 ************************************ 00:17:44.700 START TEST raid_state_function_test_sb 00:17:44.700 ************************************ 00:17:44.700 05:00:14 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@226 -- # raid_pid=128594 00:17:44.700 Process raid pid: 128594 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128594' 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:44.700 05:00:14 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128594 /var/tmp/spdk-raid.sock 00:17:44.700 05:00:14 -- common/autotest_common.sh@819 -- # '[' -z 128594 ']' 00:17:44.700 05:00:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:44.700 05:00:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:44.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:44.700 05:00:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:44.700 05:00:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:44.700 05:00:14 -- common/autotest_common.sh@10 -- # set +x 00:17:44.700 [2024-04-27 05:00:14.473066] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:17:44.700 [2024-04-27 05:00:14.473336] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.959 [2024-04-27 05:00:14.646230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.959 [2024-04-27 05:00:14.779045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.217 [2024-04-27 05:00:14.863862] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.783 05:00:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:45.783 05:00:15 -- common/autotest_common.sh@852 -- # return 0 00:17:45.783 05:00:15 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:45.783 [2024-04-27 05:00:15.634497] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:45.783 [2024-04-27 05:00:15.634629] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:45.783 [2024-04-27 05:00:15.634646] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:45.783 [2024-04-27 05:00:15.634671] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:45.783 [2024-04-27 05:00:15.634680] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:45.783 [2024-04-27 05:00:15.634734] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:45.783 05:00:15 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:45.783 05:00:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:45.783 05:00:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:45.783 05:00:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:45.783 05:00:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:45.783 05:00:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:45.783 05:00:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:45.783 05:00:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:45.783 05:00:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:45.783 05:00:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:45.783 05:00:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.783 05:00:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.041 05:00:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:46.041 "name": "Existed_Raid", 00:17:46.041 "uuid": "aeaae757-ae11-4c16-bc0f-09a6746f12f2", 00:17:46.041 "strip_size_kb": 64, 00:17:46.041 "state": "configuring", 00:17:46.041 "raid_level": "concat", 00:17:46.041 "superblock": true, 00:17:46.041 "num_base_bdevs": 3, 00:17:46.041 "num_base_bdevs_discovered": 0, 00:17:46.041 "num_base_bdevs_operational": 3, 00:17:46.041 "base_bdevs_list": [ 00:17:46.041 { 00:17:46.041 "name": "BaseBdev1", 00:17:46.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.041 "is_configured": false, 00:17:46.041 "data_offset": 0, 00:17:46.041 "data_size": 0 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "name": "BaseBdev2", 00:17:46.041 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:46.041 "is_configured": false, 00:17:46.041 "data_offset": 0, 00:17:46.041 "data_size": 0 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "name": "BaseBdev3", 00:17:46.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.041 "is_configured": false, 00:17:46.041 "data_offset": 0, 00:17:46.041 "data_size": 0 00:17:46.041 } 00:17:46.041 ] 00:17:46.041 }' 00:17:46.041 05:00:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:46.041 05:00:15 -- common/autotest_common.sh@10 -- # set +x 00:17:46.974 05:00:16 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:46.974 [2024-04-27 05:00:16.778559] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:46.974 [2024-04-27 05:00:16.778636] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:17:46.974 05:00:16 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:47.233 [2024-04-27 05:00:17.006690] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:47.233 [2024-04-27 05:00:17.006803] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:47.233 [2024-04-27 05:00:17.006820] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:47.233 [2024-04-27 05:00:17.006855] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:47.233 [2024-04-27 05:00:17.006866] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:47.233 [2024-04-27 05:00:17.006899] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:47.233 05:00:17 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:47.491 [2024-04-27 05:00:17.307063] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.491 BaseBdev1 00:17:47.491 05:00:17 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:47.491 05:00:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:47.491 05:00:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:47.491 05:00:17 -- common/autotest_common.sh@889 -- # local i 00:17:47.491 05:00:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:47.491 05:00:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:47.491 05:00:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:47.749 05:00:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:48.008 [ 00:17:48.008 { 00:17:48.008 "name": "BaseBdev1", 00:17:48.008 "aliases": [ 00:17:48.008 "9a7b7bc9-af88-4983-95ad-ce5a09b87214" 00:17:48.008 ], 00:17:48.008 "product_name": "Malloc disk", 00:17:48.008 "block_size": 512, 00:17:48.008 "num_blocks": 65536, 00:17:48.008 "uuid": "9a7b7bc9-af88-4983-95ad-ce5a09b87214", 00:17:48.008 "assigned_rate_limits": { 00:17:48.008 "rw_ios_per_sec": 0, 00:17:48.008 "rw_mbytes_per_sec": 0, 00:17:48.008 "r_mbytes_per_sec": 0, 00:17:48.008 
"w_mbytes_per_sec": 0 00:17:48.008 }, 00:17:48.008 "claimed": true, 00:17:48.008 "claim_type": "exclusive_write", 00:17:48.008 "zoned": false, 00:17:48.008 "supported_io_types": { 00:17:48.008 "read": true, 00:17:48.008 "write": true, 00:17:48.008 "unmap": true, 00:17:48.008 "write_zeroes": true, 00:17:48.008 "flush": true, 00:17:48.008 "reset": true, 00:17:48.008 "compare": false, 00:17:48.008 "compare_and_write": false, 00:17:48.008 "abort": true, 00:17:48.008 "nvme_admin": false, 00:17:48.008 "nvme_io": false 00:17:48.008 }, 00:17:48.008 "memory_domains": [ 00:17:48.008 { 00:17:48.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.008 "dma_device_type": 2 00:17:48.008 } 00:17:48.008 ], 00:17:48.008 "driver_specific": {} 00:17:48.008 } 00:17:48.008 ] 00:17:48.008 05:00:17 -- common/autotest_common.sh@895 -- # return 0 00:17:48.008 05:00:17 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:48.008 05:00:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:48.008 05:00:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:48.008 05:00:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:48.008 05:00:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:48.008 05:00:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:48.008 05:00:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.008 05:00:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:48.008 05:00:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.008 05:00:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.008 05:00:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.008 05:00:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.266 05:00:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:48.266 "name": "Existed_Raid", 00:17:48.266 "uuid": "e0f6ef23-e0f7-48d6-b3a6-29f99f06a107", 00:17:48.266 "strip_size_kb": 64, 00:17:48.266 "state": "configuring", 00:17:48.266 "raid_level": "concat", 00:17:48.266 "superblock": true, 00:17:48.266 "num_base_bdevs": 3, 00:17:48.266 "num_base_bdevs_discovered": 1, 00:17:48.266 "num_base_bdevs_operational": 3, 00:17:48.266 "base_bdevs_list": [ 00:17:48.266 { 00:17:48.266 "name": "BaseBdev1", 00:17:48.266 "uuid": "9a7b7bc9-af88-4983-95ad-ce5a09b87214", 00:17:48.266 "is_configured": true, 00:17:48.266 "data_offset": 2048, 00:17:48.266 "data_size": 63488 00:17:48.266 }, 00:17:48.266 { 00:17:48.266 "name": "BaseBdev2", 00:17:48.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.266 "is_configured": false, 00:17:48.266 "data_offset": 0, 00:17:48.266 "data_size": 0 00:17:48.266 }, 00:17:48.266 { 00:17:48.266 "name": "BaseBdev3", 00:17:48.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.266 "is_configured": false, 00:17:48.266 "data_offset": 0, 00:17:48.267 "data_size": 0 00:17:48.267 } 00:17:48.267 ] 00:17:48.267 }' 00:17:48.267 05:00:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:48.267 05:00:18 -- common/autotest_common.sh@10 -- # set +x 00:17:49.223 05:00:18 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:49.224 [2024-04-27 05:00:19.019602] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:49.224 [2024-04-27 05:00:19.019706] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:49.224 05:00:19 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:49.224 05:00:19 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:49.495 05:00:19 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:49.754 BaseBdev1 00:17:49.754 05:00:19 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:49.754 05:00:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:49.754 05:00:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:49.754 05:00:19 -- common/autotest_common.sh@889 -- # local i 00:17:49.754 05:00:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:49.754 05:00:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:49.754 05:00:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:50.012 05:00:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:50.270 [ 00:17:50.270 { 00:17:50.270 "name": "BaseBdev1", 00:17:50.270 "aliases": [ 00:17:50.270 "d71f1739-8e43-45e9-9fb0-5f53de92963a" 00:17:50.270 ], 00:17:50.270 "product_name": "Malloc disk", 00:17:50.270 "block_size": 512, 00:17:50.270 "num_blocks": 65536, 00:17:50.270 "uuid": "d71f1739-8e43-45e9-9fb0-5f53de92963a", 00:17:50.270 "assigned_rate_limits": { 00:17:50.270 "rw_ios_per_sec": 0, 00:17:50.270 "rw_mbytes_per_sec": 0, 00:17:50.270 "r_mbytes_per_sec": 0, 00:17:50.270 "w_mbytes_per_sec": 0 00:17:50.270 }, 00:17:50.270 "claimed": false, 00:17:50.270 "zoned": false, 00:17:50.270 "supported_io_types": { 00:17:50.270 "read": true, 00:17:50.270 "write": true, 00:17:50.270 "unmap": true, 00:17:50.270 "write_zeroes": true, 00:17:50.270 "flush": true, 00:17:50.270 "reset": true, 00:17:50.270 "compare": false, 00:17:50.270 "compare_and_write": false, 00:17:50.270 "abort": true, 00:17:50.270 "nvme_admin": false, 00:17:50.270 "nvme_io": false 00:17:50.270 }, 00:17:50.270 "memory_domains": [ 00:17:50.270 { 00:17:50.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.270 "dma_device_type": 2 00:17:50.270 } 00:17:50.270 ], 00:17:50.270 "driver_specific": {} 00:17:50.270 } 00:17:50.270 ] 00:17:50.270 05:00:20 -- common/autotest_common.sh@895 -- # return 0 00:17:50.270 05:00:20 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:50.529 [2024-04-27 05:00:20.338632] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:50.529 [2024-04-27 05:00:20.341189] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:50.529 [2024-04-27 05:00:20.341274] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:50.529 [2024-04-27 05:00:20.341291] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:50.529 [2024-04-27 05:00:20.341325] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:50.529 05:00:20 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:50.530 05:00:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:50.530 
05:00:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:50.530 05:00:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:50.530 05:00:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:50.530 05:00:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:50.530 05:00:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:50.530 05:00:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:50.530 05:00:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:50.530 05:00:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:50.530 05:00:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:50.530 05:00:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:50.530 05:00:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.530 05:00:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.788 05:00:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:50.788 "name": "Existed_Raid", 00:17:50.788 "uuid": "1d03c391-3ac9-4fe6-92a1-5d16549ddb30", 00:17:50.788 "strip_size_kb": 64, 00:17:50.788 "state": "configuring", 00:17:50.788 "raid_level": "concat", 00:17:50.788 "superblock": true, 00:17:50.788 "num_base_bdevs": 3, 00:17:50.788 "num_base_bdevs_discovered": 1, 00:17:50.788 "num_base_bdevs_operational": 3, 00:17:50.788 "base_bdevs_list": [ 00:17:50.788 { 00:17:50.788 "name": "BaseBdev1", 00:17:50.788 "uuid": "d71f1739-8e43-45e9-9fb0-5f53de92963a", 00:17:50.788 "is_configured": true, 00:17:50.788 "data_offset": 2048, 00:17:50.788 "data_size": 63488 00:17:50.788 }, 00:17:50.788 { 00:17:50.788 "name": "BaseBdev2", 00:17:50.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.788 "is_configured": false, 00:17:50.788 "data_offset": 0, 00:17:50.788 "data_size": 0 00:17:50.788 }, 00:17:50.788 { 00:17:50.788 "name": "BaseBdev3", 00:17:50.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.788 "is_configured": false, 00:17:50.788 "data_offset": 0, 00:17:50.788 "data_size": 0 00:17:50.788 } 00:17:50.788 ] 00:17:50.788 }' 00:17:50.788 05:00:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:50.788 05:00:20 -- common/autotest_common.sh@10 -- # set +x 00:17:51.723 05:00:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:51.723 [2024-04-27 05:00:21.561941] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.723 BaseBdev2 00:17:51.723 05:00:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:51.723 05:00:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:51.723 05:00:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:51.723 05:00:21 -- common/autotest_common.sh@889 -- # local i 00:17:51.723 05:00:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:51.723 05:00:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:51.723 05:00:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:52.005 05:00:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:52.277 [ 00:17:52.277 { 00:17:52.277 "name": "BaseBdev2", 00:17:52.277 "aliases": [ 00:17:52.277 
"26f4a15e-2faf-4a5a-b544-2371eeb50955" 00:17:52.277 ], 00:17:52.277 "product_name": "Malloc disk", 00:17:52.277 "block_size": 512, 00:17:52.277 "num_blocks": 65536, 00:17:52.277 "uuid": "26f4a15e-2faf-4a5a-b544-2371eeb50955", 00:17:52.277 "assigned_rate_limits": { 00:17:52.277 "rw_ios_per_sec": 0, 00:17:52.277 "rw_mbytes_per_sec": 0, 00:17:52.277 "r_mbytes_per_sec": 0, 00:17:52.277 "w_mbytes_per_sec": 0 00:17:52.277 }, 00:17:52.277 "claimed": true, 00:17:52.277 "claim_type": "exclusive_write", 00:17:52.277 "zoned": false, 00:17:52.277 "supported_io_types": { 00:17:52.277 "read": true, 00:17:52.277 "write": true, 00:17:52.277 "unmap": true, 00:17:52.277 "write_zeroes": true, 00:17:52.277 "flush": true, 00:17:52.277 "reset": true, 00:17:52.277 "compare": false, 00:17:52.277 "compare_and_write": false, 00:17:52.277 "abort": true, 00:17:52.277 "nvme_admin": false, 00:17:52.277 "nvme_io": false 00:17:52.277 }, 00:17:52.277 "memory_domains": [ 00:17:52.277 { 00:17:52.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.277 "dma_device_type": 2 00:17:52.277 } 00:17:52.277 ], 00:17:52.277 "driver_specific": {} 00:17:52.277 } 00:17:52.277 ] 00:17:52.277 05:00:22 -- common/autotest_common.sh@895 -- # return 0 00:17:52.277 05:00:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:52.277 05:00:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:52.277 05:00:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:52.277 05:00:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:52.277 05:00:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:52.277 05:00:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:52.277 05:00:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:52.277 05:00:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:52.277 05:00:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:52.277 05:00:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:52.277 05:00:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:52.277 05:00:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:52.277 05:00:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.277 05:00:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.536 05:00:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:52.536 "name": "Existed_Raid", 00:17:52.536 "uuid": "1d03c391-3ac9-4fe6-92a1-5d16549ddb30", 00:17:52.536 "strip_size_kb": 64, 00:17:52.536 "state": "configuring", 00:17:52.536 "raid_level": "concat", 00:17:52.536 "superblock": true, 00:17:52.536 "num_base_bdevs": 3, 00:17:52.536 "num_base_bdevs_discovered": 2, 00:17:52.536 "num_base_bdevs_operational": 3, 00:17:52.536 "base_bdevs_list": [ 00:17:52.536 { 00:17:52.537 "name": "BaseBdev1", 00:17:52.537 "uuid": "d71f1739-8e43-45e9-9fb0-5f53de92963a", 00:17:52.537 "is_configured": true, 00:17:52.537 "data_offset": 2048, 00:17:52.537 "data_size": 63488 00:17:52.537 }, 00:17:52.537 { 00:17:52.537 "name": "BaseBdev2", 00:17:52.537 "uuid": "26f4a15e-2faf-4a5a-b544-2371eeb50955", 00:17:52.537 "is_configured": true, 00:17:52.537 "data_offset": 2048, 00:17:52.537 "data_size": 63488 00:17:52.537 }, 00:17:52.537 { 00:17:52.537 "name": "BaseBdev3", 00:17:52.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.537 "is_configured": false, 00:17:52.537 "data_offset": 0, 00:17:52.537 "data_size": 0 
00:17:52.537 } 00:17:52.537 ] 00:17:52.537 }' 00:17:52.537 05:00:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:52.537 05:00:22 -- common/autotest_common.sh@10 -- # set +x 00:17:53.471 05:00:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:53.471 [2024-04-27 05:00:23.295902] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:53.471 [2024-04-27 05:00:23.296209] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:17:53.471 [2024-04-27 05:00:23.296227] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:53.471 [2024-04-27 05:00:23.296822] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:53.471 [2024-04-27 05:00:23.297428] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:17:53.471 [2024-04-27 05:00:23.297466] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:17:53.471 [2024-04-27 05:00:23.297649] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.471 BaseBdev3 00:17:53.471 05:00:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:53.471 05:00:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:53.471 05:00:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:53.471 05:00:23 -- common/autotest_common.sh@889 -- # local i 00:17:53.471 05:00:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:53.471 05:00:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:53.471 05:00:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:53.729 05:00:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:53.987 [ 00:17:53.987 { 00:17:53.987 "name": "BaseBdev3", 00:17:53.987 "aliases": [ 00:17:53.987 "9eb92fad-6a22-478d-a7a9-b298c04a206a" 00:17:53.987 ], 00:17:53.987 "product_name": "Malloc disk", 00:17:53.987 "block_size": 512, 00:17:53.987 "num_blocks": 65536, 00:17:53.987 "uuid": "9eb92fad-6a22-478d-a7a9-b298c04a206a", 00:17:53.987 "assigned_rate_limits": { 00:17:53.987 "rw_ios_per_sec": 0, 00:17:53.987 "rw_mbytes_per_sec": 0, 00:17:53.987 "r_mbytes_per_sec": 0, 00:17:53.987 "w_mbytes_per_sec": 0 00:17:53.987 }, 00:17:53.987 "claimed": true, 00:17:53.987 "claim_type": "exclusive_write", 00:17:53.987 "zoned": false, 00:17:53.987 "supported_io_types": { 00:17:53.987 "read": true, 00:17:53.987 "write": true, 00:17:53.987 "unmap": true, 00:17:53.987 "write_zeroes": true, 00:17:53.987 "flush": true, 00:17:53.987 "reset": true, 00:17:53.987 "compare": false, 00:17:53.987 "compare_and_write": false, 00:17:53.987 "abort": true, 00:17:53.987 "nvme_admin": false, 00:17:53.987 "nvme_io": false 00:17:53.987 }, 00:17:53.987 "memory_domains": [ 00:17:53.987 { 00:17:53.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.987 "dma_device_type": 2 00:17:53.987 } 00:17:53.987 ], 00:17:53.987 "driver_specific": {} 00:17:53.987 } 00:17:53.987 ] 00:17:53.987 05:00:23 -- common/autotest_common.sh@895 -- # return 0 00:17:53.987 05:00:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:53.987 05:00:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:53.987 05:00:23 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:53.987 05:00:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:53.987 05:00:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:53.987 05:00:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:53.987 05:00:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:53.987 05:00:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:53.987 05:00:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:53.987 05:00:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:53.987 05:00:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:53.987 05:00:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:53.987 05:00:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.987 05:00:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.244 05:00:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:54.244 "name": "Existed_Raid", 00:17:54.244 "uuid": "1d03c391-3ac9-4fe6-92a1-5d16549ddb30", 00:17:54.244 "strip_size_kb": 64, 00:17:54.244 "state": "online", 00:17:54.244 "raid_level": "concat", 00:17:54.244 "superblock": true, 00:17:54.244 "num_base_bdevs": 3, 00:17:54.244 "num_base_bdevs_discovered": 3, 00:17:54.244 "num_base_bdevs_operational": 3, 00:17:54.244 "base_bdevs_list": [ 00:17:54.244 { 00:17:54.244 "name": "BaseBdev1", 00:17:54.244 "uuid": "d71f1739-8e43-45e9-9fb0-5f53de92963a", 00:17:54.244 "is_configured": true, 00:17:54.244 "data_offset": 2048, 00:17:54.244 "data_size": 63488 00:17:54.244 }, 00:17:54.244 { 00:17:54.244 "name": "BaseBdev2", 00:17:54.244 "uuid": "26f4a15e-2faf-4a5a-b544-2371eeb50955", 00:17:54.244 "is_configured": true, 00:17:54.244 "data_offset": 2048, 00:17:54.244 "data_size": 63488 00:17:54.244 }, 00:17:54.244 { 00:17:54.244 "name": "BaseBdev3", 00:17:54.244 "uuid": "9eb92fad-6a22-478d-a7a9-b298c04a206a", 00:17:54.244 "is_configured": true, 00:17:54.244 "data_offset": 2048, 00:17:54.244 "data_size": 63488 00:17:54.244 } 00:17:54.244 ] 00:17:54.244 }' 00:17:54.244 05:00:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:54.244 05:00:24 -- common/autotest_common.sh@10 -- # set +x 00:17:55.176 05:00:24 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:55.176 [2024-04-27 05:00:25.036608] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:55.176 [2024-04-27 05:00:25.036669] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:55.176 [2024-04-27 05:00:25.036762] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.434 05:00:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:55.434 05:00:25 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:55.434 05:00:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:55.434 05:00:25 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:55.434 05:00:25 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:55.434 05:00:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:55.434 05:00:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:55.434 05:00:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:55.434 05:00:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:55.434 05:00:25 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:55.434 05:00:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:55.434 05:00:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.434 05:00:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.434 05:00:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.434 05:00:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.434 05:00:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.434 05:00:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.691 05:00:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:55.691 "name": "Existed_Raid", 00:17:55.691 "uuid": "1d03c391-3ac9-4fe6-92a1-5d16549ddb30", 00:17:55.691 "strip_size_kb": 64, 00:17:55.691 "state": "offline", 00:17:55.691 "raid_level": "concat", 00:17:55.691 "superblock": true, 00:17:55.691 "num_base_bdevs": 3, 00:17:55.691 "num_base_bdevs_discovered": 2, 00:17:55.691 "num_base_bdevs_operational": 2, 00:17:55.691 "base_bdevs_list": [ 00:17:55.691 { 00:17:55.691 "name": null, 00:17:55.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.691 "is_configured": false, 00:17:55.691 "data_offset": 2048, 00:17:55.691 "data_size": 63488 00:17:55.691 }, 00:17:55.691 { 00:17:55.691 "name": "BaseBdev2", 00:17:55.691 "uuid": "26f4a15e-2faf-4a5a-b544-2371eeb50955", 00:17:55.691 "is_configured": true, 00:17:55.691 "data_offset": 2048, 00:17:55.691 "data_size": 63488 00:17:55.691 }, 00:17:55.691 { 00:17:55.691 "name": "BaseBdev3", 00:17:55.691 "uuid": "9eb92fad-6a22-478d-a7a9-b298c04a206a", 00:17:55.691 "is_configured": true, 00:17:55.691 "data_offset": 2048, 00:17:55.691 "data_size": 63488 00:17:55.691 } 00:17:55.691 ] 00:17:55.691 }' 00:17:55.691 05:00:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:55.691 05:00:25 -- common/autotest_common.sh@10 -- # set +x 00:17:56.256 05:00:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:56.256 05:00:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:56.256 05:00:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.256 05:00:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:56.515 05:00:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:56.515 05:00:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:56.515 05:00:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:56.772 [2024-04-27 05:00:26.492114] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:56.772 05:00:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:56.772 05:00:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:56.772 05:00:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.773 05:00:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:57.029 05:00:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:57.029 05:00:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:57.029 05:00:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:57.292 [2024-04-27 05:00:27.033809] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 
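For reference, the offline transition being verified above can be reproduced by hand with the same RPCs the test script issues. The following is a minimal sketch only, assuming an SPDK application such as bdev_svc is already listening on /var/tmp/spdk-raid.sock (as in this run) and that rpc.py is invoked from the SPDK repository root; it mirrors the bdev_malloc_create, bdev_raid_create, bdev_malloc_delete and bdev_raid_get_bdevs calls visible in the log.

# Sketch: build a 3-disk concat array, then drop one base bdev and read back its state.
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# concat has no redundancy, so removing any base bdev takes the array offline:
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state'    # expected: offline

The test above drives exactly this sequence through bdev_raid.sh, using verify_raid_bdev_state to check the state, num_base_bdevs_discovered and num_base_bdevs_operational fields after each base bdev is added or removed.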
00:17:57.292 [2024-04-27 05:00:27.033902] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:17:57.292 05:00:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:57.292 05:00:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:57.292 05:00:27 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.292 05:00:27 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:57.569 05:00:27 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:57.569 05:00:27 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:57.569 05:00:27 -- bdev/bdev_raid.sh@287 -- # killprocess 128594 00:17:57.569 05:00:27 -- common/autotest_common.sh@926 -- # '[' -z 128594 ']' 00:17:57.569 05:00:27 -- common/autotest_common.sh@930 -- # kill -0 128594 00:17:57.569 05:00:27 -- common/autotest_common.sh@931 -- # uname 00:17:57.569 05:00:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:57.569 05:00:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128594 00:17:57.569 05:00:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:57.569 05:00:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:57.569 05:00:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128594' 00:17:57.569 killing process with pid 128594 00:17:57.569 05:00:27 -- common/autotest_common.sh@945 -- # kill 128594 00:17:57.569 05:00:27 -- common/autotest_common.sh@950 -- # wait 128594 00:17:57.569 [2024-04-27 05:00:27.335972] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:57.569 [2024-04-27 05:00:27.336093] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:57.860 00:17:57.860 real 0m13.284s 00:17:57.860 user 0m24.145s 00:17:57.860 sys 0m1.842s 00:17:57.860 05:00:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:57.860 05:00:27 -- common/autotest_common.sh@10 -- # set +x 00:17:57.860 ************************************ 00:17:57.860 END TEST raid_state_function_test_sb 00:17:57.860 ************************************ 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:17:57.860 05:00:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:57.860 05:00:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:57.860 05:00:27 -- common/autotest_common.sh@10 -- # set +x 00:17:57.860 ************************************ 00:17:57.860 START TEST raid_superblock_test 00:17:57.860 ************************************ 00:17:57.860 05:00:27 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@344 -- # local strip_size 
00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@357 -- # raid_pid=128991 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@358 -- # waitforlisten 128991 /var/tmp/spdk-raid.sock 00:17:57.860 05:00:27 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:57.860 05:00:27 -- common/autotest_common.sh@819 -- # '[' -z 128991 ']' 00:17:57.860 05:00:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:57.860 05:00:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:57.860 05:00:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:57.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:57.860 05:00:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:57.860 05:00:27 -- common/autotest_common.sh@10 -- # set +x 00:17:58.117 [2024-04-27 05:00:27.811412] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:58.117 [2024-04-27 05:00:27.811687] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128991 ] 00:17:58.117 [2024-04-27 05:00:27.990700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.374 [2024-04-27 05:00:28.122043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.374 [2024-04-27 05:00:28.200807] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.939 05:00:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:58.939 05:00:28 -- common/autotest_common.sh@852 -- # return 0 00:17:58.939 05:00:28 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:58.939 05:00:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:58.939 05:00:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:58.939 05:00:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:58.939 05:00:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:58.939 05:00:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:58.939 05:00:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:58.939 05:00:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:58.939 05:00:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:59.197 malloc1 00:17:59.197 05:00:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:59.455 [2024-04-27 05:00:29.262792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:59.455 [2024-04-27 05:00:29.262941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:59.455 [2024-04-27 05:00:29.262993] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:17:59.455 [2024-04-27 05:00:29.263059] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.455 [2024-04-27 05:00:29.266137] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.455 [2024-04-27 05:00:29.266196] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:59.455 pt1 00:17:59.455 05:00:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:59.455 05:00:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:59.455 05:00:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:59.455 05:00:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:59.455 05:00:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:59.455 05:00:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:59.455 05:00:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:59.455 05:00:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:59.455 05:00:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:59.713 malloc2 00:17:59.713 05:00:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:59.971 [2024-04-27 05:00:29.766167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:59.971 [2024-04-27 05:00:29.766295] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.971 [2024-04-27 05:00:29.766353] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:59.971 [2024-04-27 05:00:29.766436] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.971 [2024-04-27 05:00:29.769300] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.971 [2024-04-27 05:00:29.769361] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:59.971 pt2 00:17:59.971 05:00:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:59.971 05:00:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:59.971 05:00:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:59.971 05:00:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:59.971 05:00:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:59.971 05:00:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:59.971 05:00:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:59.971 05:00:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:59.971 05:00:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:00.228 malloc3 00:18:00.228 05:00:30 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:00.485 [2024-04-27 05:00:30.268504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:00.485 [2024-04-27 05:00:30.268655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:18:00.485 [2024-04-27 05:00:30.268735] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:00.485 [2024-04-27 05:00:30.268792] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.485 [2024-04-27 05:00:30.271650] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.485 [2024-04-27 05:00:30.271722] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:00.485 pt3 00:18:00.485 05:00:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:00.485 05:00:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:00.485 05:00:30 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:18:00.743 [2024-04-27 05:00:30.500712] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:00.743 [2024-04-27 05:00:30.503287] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.743 [2024-04-27 05:00:30.503378] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:00.743 [2024-04-27 05:00:30.503643] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:18:00.743 [2024-04-27 05:00:30.503672] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:00.743 [2024-04-27 05:00:30.503870] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:18:00.743 [2024-04-27 05:00:30.504369] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:18:00.743 [2024-04-27 05:00:30.504394] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:18:00.743 [2024-04-27 05:00:30.504646] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.743 05:00:30 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:00.743 05:00:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:00.743 05:00:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:00.743 05:00:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:00.743 05:00:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:00.743 05:00:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:00.743 05:00:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:00.743 05:00:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:00.743 05:00:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:00.743 05:00:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:00.743 05:00:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.743 05:00:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.001 05:00:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:01.001 "name": "raid_bdev1", 00:18:01.001 "uuid": "80ce4b8a-4919-4990-a717-75cd1a10df6f", 00:18:01.001 "strip_size_kb": 64, 00:18:01.001 "state": "online", 00:18:01.001 "raid_level": "concat", 00:18:01.001 "superblock": true, 00:18:01.001 "num_base_bdevs": 3, 00:18:01.001 "num_base_bdevs_discovered": 3, 00:18:01.001 "num_base_bdevs_operational": 3, 00:18:01.001 "base_bdevs_list": [ 00:18:01.001 { 00:18:01.001 "name": "pt1", 00:18:01.001 "uuid": 
"ba810798-d214-5c2d-9a1c-170756336a43", 00:18:01.001 "is_configured": true, 00:18:01.001 "data_offset": 2048, 00:18:01.001 "data_size": 63488 00:18:01.001 }, 00:18:01.001 { 00:18:01.001 "name": "pt2", 00:18:01.001 "uuid": "4c10b0fd-26d5-53b9-99ac-63d29cc43737", 00:18:01.001 "is_configured": true, 00:18:01.001 "data_offset": 2048, 00:18:01.001 "data_size": 63488 00:18:01.001 }, 00:18:01.001 { 00:18:01.001 "name": "pt3", 00:18:01.001 "uuid": "472c9684-324e-5c78-a5b3-43aacb456230", 00:18:01.001 "is_configured": true, 00:18:01.001 "data_offset": 2048, 00:18:01.001 "data_size": 63488 00:18:01.001 } 00:18:01.001 ] 00:18:01.001 }' 00:18:01.001 05:00:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:01.001 05:00:30 -- common/autotest_common.sh@10 -- # set +x 00:18:01.566 05:00:31 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:01.566 05:00:31 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:01.823 [2024-04-27 05:00:31.669318] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.823 05:00:31 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=80ce4b8a-4919-4990-a717-75cd1a10df6f 00:18:01.823 05:00:31 -- bdev/bdev_raid.sh@380 -- # '[' -z 80ce4b8a-4919-4990-a717-75cd1a10df6f ']' 00:18:01.823 05:00:31 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:02.081 [2024-04-27 05:00:31.929083] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.081 [2024-04-27 05:00:31.929142] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.081 [2024-04-27 05:00:31.929279] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.081 [2024-04-27 05:00:31.929377] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.081 [2024-04-27 05:00:31.929393] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:18:02.081 05:00:31 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.081 05:00:31 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:02.338 05:00:32 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:02.338 05:00:32 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:02.338 05:00:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.338 05:00:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:02.596 05:00:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.596 05:00:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:02.856 05:00:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.856 05:00:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:03.115 05:00:32 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:03.115 05:00:32 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:03.374 05:00:33 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:03.374 05:00:33 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:03.374 05:00:33 -- common/autotest_common.sh@640 -- # local es=0 00:18:03.374 05:00:33 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:03.374 05:00:33 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.374 05:00:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:03.374 05:00:33 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.374 05:00:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:03.374 05:00:33 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.374 05:00:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:03.374 05:00:33 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.374 05:00:33 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:03.374 05:00:33 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:03.633 [2024-04-27 05:00:33.453463] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:03.633 [2024-04-27 05:00:33.455996] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:03.633 [2024-04-27 05:00:33.456066] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:03.633 [2024-04-27 05:00:33.456138] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:03.633 [2024-04-27 05:00:33.456271] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:03.633 [2024-04-27 05:00:33.456326] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:03.633 [2024-04-27 05:00:33.456388] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.633 [2024-04-27 05:00:33.456403] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:18:03.633 request: 00:18:03.633 { 00:18:03.633 "name": "raid_bdev1", 00:18:03.633 "raid_level": "concat", 00:18:03.633 "base_bdevs": [ 00:18:03.633 "malloc1", 00:18:03.633 "malloc2", 00:18:03.633 "malloc3" 00:18:03.633 ], 00:18:03.633 "superblock": false, 00:18:03.633 "strip_size_kb": 64, 00:18:03.633 "method": "bdev_raid_create", 00:18:03.633 "req_id": 1 00:18:03.633 } 00:18:03.633 Got JSON-RPC error response 00:18:03.633 response: 00:18:03.633 { 00:18:03.633 "code": -17, 00:18:03.633 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:03.633 } 00:18:03.633 05:00:33 -- common/autotest_common.sh@643 -- # es=1 00:18:03.633 05:00:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:03.633 05:00:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:03.633 05:00:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:03.633 05:00:33 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:03.633 05:00:33 -- bdev/bdev_raid.sh@403 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.892 05:00:33 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:03.892 05:00:33 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:03.892 05:00:33 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:04.150 [2024-04-27 05:00:33.969483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:04.150 [2024-04-27 05:00:33.969614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.150 [2024-04-27 05:00:33.969665] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:04.150 [2024-04-27 05:00:33.969692] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.150 [2024-04-27 05:00:33.972536] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.150 [2024-04-27 05:00:33.972615] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:04.150 [2024-04-27 05:00:33.972759] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:04.150 [2024-04-27 05:00:33.972847] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:04.150 pt1 00:18:04.150 05:00:33 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:18:04.150 05:00:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:04.150 05:00:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:04.150 05:00:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:04.150 05:00:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:04.150 05:00:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:04.150 05:00:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:04.150 05:00:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:04.150 05:00:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:04.150 05:00:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:04.150 05:00:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.150 05:00:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.408 05:00:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:04.408 "name": "raid_bdev1", 00:18:04.408 "uuid": "80ce4b8a-4919-4990-a717-75cd1a10df6f", 00:18:04.408 "strip_size_kb": 64, 00:18:04.408 "state": "configuring", 00:18:04.408 "raid_level": "concat", 00:18:04.408 "superblock": true, 00:18:04.408 "num_base_bdevs": 3, 00:18:04.408 "num_base_bdevs_discovered": 1, 00:18:04.408 "num_base_bdevs_operational": 3, 00:18:04.408 "base_bdevs_list": [ 00:18:04.408 { 00:18:04.408 "name": "pt1", 00:18:04.408 "uuid": "ba810798-d214-5c2d-9a1c-170756336a43", 00:18:04.408 "is_configured": true, 00:18:04.408 "data_offset": 2048, 00:18:04.408 "data_size": 63488 00:18:04.408 }, 00:18:04.408 { 00:18:04.408 "name": null, 00:18:04.408 "uuid": "4c10b0fd-26d5-53b9-99ac-63d29cc43737", 00:18:04.408 "is_configured": false, 00:18:04.408 "data_offset": 2048, 00:18:04.408 "data_size": 63488 00:18:04.408 }, 00:18:04.408 { 00:18:04.408 "name": null, 00:18:04.408 "uuid": "472c9684-324e-5c78-a5b3-43aacb456230", 00:18:04.408 "is_configured": false, 00:18:04.408 "data_offset": 
2048, 00:18:04.408 "data_size": 63488 00:18:04.408 } 00:18:04.408 ] 00:18:04.408 }' 00:18:04.409 05:00:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:04.409 05:00:34 -- common/autotest_common.sh@10 -- # set +x 00:18:05.015 05:00:34 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:18:05.015 05:00:34 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:05.273 [2024-04-27 05:00:35.061788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:05.273 [2024-04-27 05:00:35.061940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.273 [2024-04-27 05:00:35.062003] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:05.273 [2024-04-27 05:00:35.062030] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.273 [2024-04-27 05:00:35.062602] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.273 [2024-04-27 05:00:35.062652] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:05.273 [2024-04-27 05:00:35.062788] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:05.273 [2024-04-27 05:00:35.062821] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:05.273 pt2 00:18:05.273 05:00:35 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:05.531 [2024-04-27 05:00:35.333914] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:05.531 05:00:35 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:18:05.531 05:00:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:05.531 05:00:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:05.531 05:00:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:05.531 05:00:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:05.531 05:00:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:05.531 05:00:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:05.531 05:00:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:05.531 05:00:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:05.531 05:00:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:05.531 05:00:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.531 05:00:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.789 05:00:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:05.789 "name": "raid_bdev1", 00:18:05.789 "uuid": "80ce4b8a-4919-4990-a717-75cd1a10df6f", 00:18:05.789 "strip_size_kb": 64, 00:18:05.789 "state": "configuring", 00:18:05.789 "raid_level": "concat", 00:18:05.789 "superblock": true, 00:18:05.789 "num_base_bdevs": 3, 00:18:05.789 "num_base_bdevs_discovered": 1, 00:18:05.789 "num_base_bdevs_operational": 3, 00:18:05.789 "base_bdevs_list": [ 00:18:05.789 { 00:18:05.789 "name": "pt1", 00:18:05.789 "uuid": "ba810798-d214-5c2d-9a1c-170756336a43", 00:18:05.789 "is_configured": true, 00:18:05.789 "data_offset": 2048, 00:18:05.789 "data_size": 63488 00:18:05.789 }, 00:18:05.789 { 00:18:05.789 "name": null, 00:18:05.789 "uuid": 
"4c10b0fd-26d5-53b9-99ac-63d29cc43737", 00:18:05.789 "is_configured": false, 00:18:05.789 "data_offset": 2048, 00:18:05.789 "data_size": 63488 00:18:05.790 }, 00:18:05.790 { 00:18:05.790 "name": null, 00:18:05.790 "uuid": "472c9684-324e-5c78-a5b3-43aacb456230", 00:18:05.790 "is_configured": false, 00:18:05.790 "data_offset": 2048, 00:18:05.790 "data_size": 63488 00:18:05.790 } 00:18:05.790 ] 00:18:05.790 }' 00:18:05.790 05:00:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:05.790 05:00:35 -- common/autotest_common.sh@10 -- # set +x 00:18:06.357 05:00:36 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:06.357 05:00:36 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:06.357 05:00:36 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:06.615 [2024-04-27 05:00:36.510174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:06.615 [2024-04-27 05:00:36.510325] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.615 [2024-04-27 05:00:36.510377] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:06.615 [2024-04-27 05:00:36.510412] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.615 [2024-04-27 05:00:36.511032] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.615 [2024-04-27 05:00:36.511093] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:06.615 [2024-04-27 05:00:36.511225] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:06.615 [2024-04-27 05:00:36.511267] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:06.872 pt2 00:18:06.872 05:00:36 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:06.872 05:00:36 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:06.872 05:00:36 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:07.130 [2024-04-27 05:00:36.778220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:07.130 [2024-04-27 05:00:36.778352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.130 [2024-04-27 05:00:36.778413] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:07.130 [2024-04-27 05:00:36.778449] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.130 [2024-04-27 05:00:36.779071] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.130 [2024-04-27 05:00:36.779127] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:07.130 [2024-04-27 05:00:36.779266] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:07.130 [2024-04-27 05:00:36.779300] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:07.130 [2024-04-27 05:00:36.779495] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:18:07.130 [2024-04-27 05:00:36.779522] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:07.130 [2024-04-27 05:00:36.779624] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:18:07.130 [2024-04-27 05:00:36.780009] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:18:07.130 [2024-04-27 05:00:36.780031] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:18:07.130 [2024-04-27 05:00:36.780154] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.130 pt3 00:18:07.130 05:00:36 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:07.130 05:00:36 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:07.130 05:00:36 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:07.130 05:00:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:07.130 05:00:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:07.130 05:00:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:07.130 05:00:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:07.130 05:00:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:07.130 05:00:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:07.130 05:00:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:07.130 05:00:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:07.130 05:00:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:07.130 05:00:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.130 05:00:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.387 05:00:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:07.387 "name": "raid_bdev1", 00:18:07.387 "uuid": "80ce4b8a-4919-4990-a717-75cd1a10df6f", 00:18:07.387 "strip_size_kb": 64, 00:18:07.387 "state": "online", 00:18:07.387 "raid_level": "concat", 00:18:07.387 "superblock": true, 00:18:07.387 "num_base_bdevs": 3, 00:18:07.387 "num_base_bdevs_discovered": 3, 00:18:07.387 "num_base_bdevs_operational": 3, 00:18:07.387 "base_bdevs_list": [ 00:18:07.387 { 00:18:07.387 "name": "pt1", 00:18:07.387 "uuid": "ba810798-d214-5c2d-9a1c-170756336a43", 00:18:07.387 "is_configured": true, 00:18:07.387 "data_offset": 2048, 00:18:07.387 "data_size": 63488 00:18:07.387 }, 00:18:07.387 { 00:18:07.387 "name": "pt2", 00:18:07.387 "uuid": "4c10b0fd-26d5-53b9-99ac-63d29cc43737", 00:18:07.387 "is_configured": true, 00:18:07.387 "data_offset": 2048, 00:18:07.387 "data_size": 63488 00:18:07.387 }, 00:18:07.387 { 00:18:07.387 "name": "pt3", 00:18:07.387 "uuid": "472c9684-324e-5c78-a5b3-43aacb456230", 00:18:07.387 "is_configured": true, 00:18:07.387 "data_offset": 2048, 00:18:07.387 "data_size": 63488 00:18:07.387 } 00:18:07.387 ] 00:18:07.387 }' 00:18:07.387 05:00:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:07.387 05:00:37 -- common/autotest_common.sh@10 -- # set +x 00:18:07.953 05:00:37 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:07.953 05:00:37 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:08.211 [2024-04-27 05:00:37.942745] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:08.211 05:00:37 -- bdev/bdev_raid.sh@430 -- # '[' 80ce4b8a-4919-4990-a717-75cd1a10df6f '!=' 80ce4b8a-4919-4990-a717-75cd1a10df6f ']' 00:18:08.211 05:00:37 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:18:08.211 05:00:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:08.211 
05:00:37 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:08.211 05:00:37 -- bdev/bdev_raid.sh@511 -- # killprocess 128991 00:18:08.211 05:00:37 -- common/autotest_common.sh@926 -- # '[' -z 128991 ']' 00:18:08.211 05:00:37 -- common/autotest_common.sh@930 -- # kill -0 128991 00:18:08.211 05:00:37 -- common/autotest_common.sh@931 -- # uname 00:18:08.211 05:00:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:08.211 05:00:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128991 00:18:08.211 killing process with pid 128991 00:18:08.211 05:00:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:08.211 05:00:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:08.211 05:00:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128991' 00:18:08.211 05:00:37 -- common/autotest_common.sh@945 -- # kill 128991 00:18:08.211 05:00:37 -- common/autotest_common.sh@950 -- # wait 128991 00:18:08.211 [2024-04-27 05:00:37.989758] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:08.211 [2024-04-27 05:00:37.989873] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.211 [2024-04-27 05:00:37.989950] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.211 [2024-04-27 05:00:37.989964] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:18:08.211 [2024-04-27 05:00:38.068520] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:08.777 ************************************ 00:18:08.777 END TEST raid_superblock_test 00:18:08.777 ************************************ 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:08.777 00:18:08.777 real 0m10.668s 00:18:08.777 user 0m19.130s 00:18:08.777 sys 0m1.535s 00:18:08.777 05:00:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:08.777 05:00:38 -- common/autotest_common.sh@10 -- # set +x 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:18:08.777 05:00:38 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:08.777 05:00:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:08.777 05:00:38 -- common/autotest_common.sh@10 -- # set +x 00:18:08.777 ************************************ 00:18:08.777 START TEST raid_state_function_test 00:18:08.777 ************************************ 00:18:08.777 05:00:38 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@226 -- # raid_pid=129303 00:18:08.777 Process raid pid: 129303 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129303' 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129303 /var/tmp/spdk-raid.sock 00:18:08.777 05:00:38 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:08.777 05:00:38 -- common/autotest_common.sh@819 -- # '[' -z 129303 ']' 00:18:08.777 05:00:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:08.777 05:00:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:08.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:08.777 05:00:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:08.777 05:00:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:08.777 05:00:38 -- common/autotest_common.sh@10 -- # set +x 00:18:08.777 [2024-04-27 05:00:38.530387] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:18:08.777 [2024-04-27 05:00:38.530655] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.035 [2024-04-27 05:00:38.701116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.035 [2024-04-27 05:00:38.824390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.035 [2024-04-27 05:00:38.904170] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:09.984 05:00:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:09.984 05:00:39 -- common/autotest_common.sh@852 -- # return 0 00:18:09.984 05:00:39 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:09.984 [2024-04-27 05:00:39.743224] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:09.984 [2024-04-27 05:00:39.743603] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:09.984 [2024-04-27 05:00:39.743730] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:09.984 [2024-04-27 05:00:39.743882] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:09.984 [2024-04-27 05:00:39.743994] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:09.984 [2024-04-27 05:00:39.744173] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:09.984 05:00:39 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:09.984 05:00:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:09.984 05:00:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:09.984 05:00:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:09.984 05:00:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:09.984 05:00:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:09.984 05:00:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:09.984 05:00:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:09.984 05:00:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:09.984 05:00:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:09.984 05:00:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.984 05:00:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.243 05:00:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:10.243 "name": "Existed_Raid", 00:18:10.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.243 "strip_size_kb": 0, 00:18:10.243 "state": "configuring", 00:18:10.243 "raid_level": "raid1", 00:18:10.243 "superblock": false, 00:18:10.243 "num_base_bdevs": 3, 00:18:10.243 "num_base_bdevs_discovered": 0, 00:18:10.243 "num_base_bdevs_operational": 3, 00:18:10.243 "base_bdevs_list": [ 00:18:10.243 { 00:18:10.243 "name": "BaseBdev1", 00:18:10.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.243 "is_configured": false, 00:18:10.243 "data_offset": 0, 00:18:10.243 "data_size": 0 00:18:10.243 }, 00:18:10.243 { 00:18:10.243 "name": "BaseBdev2", 00:18:10.243 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:10.243 "is_configured": false, 00:18:10.243 "data_offset": 0, 00:18:10.243 "data_size": 0 00:18:10.243 }, 00:18:10.243 { 00:18:10.243 "name": "BaseBdev3", 00:18:10.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.243 "is_configured": false, 00:18:10.243 "data_offset": 0, 00:18:10.243 "data_size": 0 00:18:10.243 } 00:18:10.243 ] 00:18:10.243 }' 00:18:10.243 05:00:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:10.243 05:00:39 -- common/autotest_common.sh@10 -- # set +x 00:18:10.808 05:00:40 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:11.068 [2024-04-27 05:00:40.879332] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:11.068 [2024-04-27 05:00:40.879696] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:11.068 05:00:40 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:11.329 [2024-04-27 05:00:41.155413] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:11.329 [2024-04-27 05:00:41.155794] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:11.329 [2024-04-27 05:00:41.155919] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:11.329 [2024-04-27 05:00:41.156080] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:11.329 [2024-04-27 05:00:41.156196] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:11.329 [2024-04-27 05:00:41.156271] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:11.329 05:00:41 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:11.587 [2024-04-27 05:00:41.407178] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:11.587 BaseBdev1 00:18:11.587 05:00:41 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:11.587 05:00:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:11.587 05:00:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:11.587 05:00:41 -- common/autotest_common.sh@889 -- # local i 00:18:11.587 05:00:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:11.587 05:00:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:11.587 05:00:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:11.846 05:00:41 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:12.105 [ 00:18:12.105 { 00:18:12.105 "name": "BaseBdev1", 00:18:12.105 "aliases": [ 00:18:12.105 "5443dd9c-8d99-4cc5-965b-977d7cf01908" 00:18:12.105 ], 00:18:12.105 "product_name": "Malloc disk", 00:18:12.105 "block_size": 512, 00:18:12.105 "num_blocks": 65536, 00:18:12.105 "uuid": "5443dd9c-8d99-4cc5-965b-977d7cf01908", 00:18:12.105 "assigned_rate_limits": { 00:18:12.105 "rw_ios_per_sec": 0, 00:18:12.105 "rw_mbytes_per_sec": 0, 00:18:12.105 "r_mbytes_per_sec": 0, 00:18:12.105 "w_mbytes_per_sec": 0 
00:18:12.105 }, 00:18:12.105 "claimed": true, 00:18:12.105 "claim_type": "exclusive_write", 00:18:12.105 "zoned": false, 00:18:12.105 "supported_io_types": { 00:18:12.105 "read": true, 00:18:12.105 "write": true, 00:18:12.105 "unmap": true, 00:18:12.105 "write_zeroes": true, 00:18:12.105 "flush": true, 00:18:12.105 "reset": true, 00:18:12.105 "compare": false, 00:18:12.105 "compare_and_write": false, 00:18:12.105 "abort": true, 00:18:12.105 "nvme_admin": false, 00:18:12.105 "nvme_io": false 00:18:12.105 }, 00:18:12.105 "memory_domains": [ 00:18:12.105 { 00:18:12.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.105 "dma_device_type": 2 00:18:12.105 } 00:18:12.105 ], 00:18:12.105 "driver_specific": {} 00:18:12.105 } 00:18:12.105 ] 00:18:12.105 05:00:41 -- common/autotest_common.sh@895 -- # return 0 00:18:12.105 05:00:41 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:12.105 05:00:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:12.105 05:00:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:12.105 05:00:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:12.105 05:00:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:12.105 05:00:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:12.105 05:00:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:12.105 05:00:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:12.105 05:00:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:12.105 05:00:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:12.105 05:00:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.105 05:00:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.364 05:00:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:12.364 "name": "Existed_Raid", 00:18:12.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.364 "strip_size_kb": 0, 00:18:12.364 "state": "configuring", 00:18:12.364 "raid_level": "raid1", 00:18:12.364 "superblock": false, 00:18:12.364 "num_base_bdevs": 3, 00:18:12.364 "num_base_bdevs_discovered": 1, 00:18:12.364 "num_base_bdevs_operational": 3, 00:18:12.364 "base_bdevs_list": [ 00:18:12.364 { 00:18:12.364 "name": "BaseBdev1", 00:18:12.364 "uuid": "5443dd9c-8d99-4cc5-965b-977d7cf01908", 00:18:12.364 "is_configured": true, 00:18:12.364 "data_offset": 0, 00:18:12.364 "data_size": 65536 00:18:12.364 }, 00:18:12.364 { 00:18:12.364 "name": "BaseBdev2", 00:18:12.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.364 "is_configured": false, 00:18:12.364 "data_offset": 0, 00:18:12.364 "data_size": 0 00:18:12.364 }, 00:18:12.364 { 00:18:12.364 "name": "BaseBdev3", 00:18:12.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.364 "is_configured": false, 00:18:12.364 "data_offset": 0, 00:18:12.364 "data_size": 0 00:18:12.364 } 00:18:12.364 ] 00:18:12.364 }' 00:18:12.364 05:00:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:12.364 05:00:42 -- common/autotest_common.sh@10 -- # set +x 00:18:13.298 05:00:42 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:13.298 [2024-04-27 05:00:43.047739] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:13.298 [2024-04-27 05:00:43.048067] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 
name Existed_Raid, state configuring 00:18:13.298 05:00:43 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:13.298 05:00:43 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:13.556 [2024-04-27 05:00:43.288011] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:13.556 [2024-04-27 05:00:43.290807] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:13.556 [2024-04-27 05:00:43.291003] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:13.556 [2024-04-27 05:00:43.291150] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:13.556 [2024-04-27 05:00:43.291312] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:13.556 05:00:43 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:13.556 05:00:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:13.556 05:00:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:13.556 05:00:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:13.556 05:00:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:13.556 05:00:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:13.556 05:00:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:13.556 05:00:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:13.556 05:00:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:13.556 05:00:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:13.556 05:00:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:13.556 05:00:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:13.556 05:00:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.556 05:00:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.814 05:00:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:13.814 "name": "Existed_Raid", 00:18:13.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.814 "strip_size_kb": 0, 00:18:13.814 "state": "configuring", 00:18:13.814 "raid_level": "raid1", 00:18:13.814 "superblock": false, 00:18:13.814 "num_base_bdevs": 3, 00:18:13.814 "num_base_bdevs_discovered": 1, 00:18:13.814 "num_base_bdevs_operational": 3, 00:18:13.814 "base_bdevs_list": [ 00:18:13.814 { 00:18:13.814 "name": "BaseBdev1", 00:18:13.814 "uuid": "5443dd9c-8d99-4cc5-965b-977d7cf01908", 00:18:13.814 "is_configured": true, 00:18:13.814 "data_offset": 0, 00:18:13.814 "data_size": 65536 00:18:13.814 }, 00:18:13.814 { 00:18:13.814 "name": "BaseBdev2", 00:18:13.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.814 "is_configured": false, 00:18:13.814 "data_offset": 0, 00:18:13.814 "data_size": 0 00:18:13.814 }, 00:18:13.814 { 00:18:13.814 "name": "BaseBdev3", 00:18:13.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.814 "is_configured": false, 00:18:13.814 "data_offset": 0, 00:18:13.814 "data_size": 0 00:18:13.814 } 00:18:13.814 ] 00:18:13.814 }' 00:18:13.814 05:00:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:13.814 05:00:43 -- common/autotest_common.sh@10 -- # set +x 00:18:14.381 05:00:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:14.640 [2024-04-27 05:00:44.446060] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:14.640 BaseBdev2 00:18:14.640 05:00:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:14.640 05:00:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:14.640 05:00:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:14.640 05:00:44 -- common/autotest_common.sh@889 -- # local i 00:18:14.640 05:00:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:14.640 05:00:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:14.640 05:00:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:14.898 05:00:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:15.164 [ 00:18:15.164 { 00:18:15.164 "name": "BaseBdev2", 00:18:15.164 "aliases": [ 00:18:15.164 "64a2246d-9783-4c86-a8f5-6bd2c01ec746" 00:18:15.164 ], 00:18:15.164 "product_name": "Malloc disk", 00:18:15.164 "block_size": 512, 00:18:15.164 "num_blocks": 65536, 00:18:15.164 "uuid": "64a2246d-9783-4c86-a8f5-6bd2c01ec746", 00:18:15.164 "assigned_rate_limits": { 00:18:15.165 "rw_ios_per_sec": 0, 00:18:15.165 "rw_mbytes_per_sec": 0, 00:18:15.165 "r_mbytes_per_sec": 0, 00:18:15.165 "w_mbytes_per_sec": 0 00:18:15.165 }, 00:18:15.165 "claimed": true, 00:18:15.165 "claim_type": "exclusive_write", 00:18:15.165 "zoned": false, 00:18:15.165 "supported_io_types": { 00:18:15.165 "read": true, 00:18:15.165 "write": true, 00:18:15.165 "unmap": true, 00:18:15.165 "write_zeroes": true, 00:18:15.165 "flush": true, 00:18:15.165 "reset": true, 00:18:15.165 "compare": false, 00:18:15.165 "compare_and_write": false, 00:18:15.165 "abort": true, 00:18:15.165 "nvme_admin": false, 00:18:15.165 "nvme_io": false 00:18:15.165 }, 00:18:15.165 "memory_domains": [ 00:18:15.165 { 00:18:15.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.165 "dma_device_type": 2 00:18:15.165 } 00:18:15.165 ], 00:18:15.165 "driver_specific": {} 00:18:15.165 } 00:18:15.165 ] 00:18:15.165 05:00:44 -- common/autotest_common.sh@895 -- # return 0 00:18:15.165 05:00:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:15.165 05:00:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:15.165 05:00:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:15.165 05:00:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:15.165 05:00:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:15.165 05:00:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:15.165 05:00:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:15.165 05:00:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:15.165 05:00:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:15.165 05:00:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:15.165 05:00:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:15.165 05:00:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:15.165 05:00:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.165 05:00:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.427 05:00:45 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:18:15.427 "name": "Existed_Raid", 00:18:15.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.427 "strip_size_kb": 0, 00:18:15.427 "state": "configuring", 00:18:15.427 "raid_level": "raid1", 00:18:15.427 "superblock": false, 00:18:15.427 "num_base_bdevs": 3, 00:18:15.427 "num_base_bdevs_discovered": 2, 00:18:15.427 "num_base_bdevs_operational": 3, 00:18:15.427 "base_bdevs_list": [ 00:18:15.427 { 00:18:15.427 "name": "BaseBdev1", 00:18:15.427 "uuid": "5443dd9c-8d99-4cc5-965b-977d7cf01908", 00:18:15.427 "is_configured": true, 00:18:15.427 "data_offset": 0, 00:18:15.427 "data_size": 65536 00:18:15.427 }, 00:18:15.427 { 00:18:15.427 "name": "BaseBdev2", 00:18:15.427 "uuid": "64a2246d-9783-4c86-a8f5-6bd2c01ec746", 00:18:15.427 "is_configured": true, 00:18:15.427 "data_offset": 0, 00:18:15.427 "data_size": 65536 00:18:15.427 }, 00:18:15.427 { 00:18:15.427 "name": "BaseBdev3", 00:18:15.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.427 "is_configured": false, 00:18:15.427 "data_offset": 0, 00:18:15.427 "data_size": 0 00:18:15.427 } 00:18:15.427 ] 00:18:15.427 }' 00:18:15.427 05:00:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:15.427 05:00:45 -- common/autotest_common.sh@10 -- # set +x 00:18:16.366 05:00:45 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:16.366 [2024-04-27 05:00:46.175343] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:16.366 [2024-04-27 05:00:46.175428] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:18:16.366 [2024-04-27 05:00:46.175441] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:16.366 [2024-04-27 05:00:46.175599] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:16.367 [2024-04-27 05:00:46.176113] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:18:16.367 [2024-04-27 05:00:46.176142] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:18:16.367 [2024-04-27 05:00:46.176447] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.367 BaseBdev3 00:18:16.367 05:00:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:16.367 05:00:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:16.367 05:00:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:16.367 05:00:46 -- common/autotest_common.sh@889 -- # local i 00:18:16.367 05:00:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:16.367 05:00:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:16.367 05:00:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:16.627 05:00:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:16.884 [ 00:18:16.884 { 00:18:16.884 "name": "BaseBdev3", 00:18:16.884 "aliases": [ 00:18:16.884 "6a4a5087-5bdf-47c7-abab-98a9052f036d" 00:18:16.884 ], 00:18:16.884 "product_name": "Malloc disk", 00:18:16.884 "block_size": 512, 00:18:16.884 "num_blocks": 65536, 00:18:16.884 "uuid": "6a4a5087-5bdf-47c7-abab-98a9052f036d", 00:18:16.884 "assigned_rate_limits": { 00:18:16.884 "rw_ios_per_sec": 0, 00:18:16.884 "rw_mbytes_per_sec": 0, 
00:18:16.884 "r_mbytes_per_sec": 0, 00:18:16.884 "w_mbytes_per_sec": 0 00:18:16.884 }, 00:18:16.884 "claimed": true, 00:18:16.884 "claim_type": "exclusive_write", 00:18:16.884 "zoned": false, 00:18:16.884 "supported_io_types": { 00:18:16.884 "read": true, 00:18:16.884 "write": true, 00:18:16.884 "unmap": true, 00:18:16.884 "write_zeroes": true, 00:18:16.884 "flush": true, 00:18:16.884 "reset": true, 00:18:16.884 "compare": false, 00:18:16.884 "compare_and_write": false, 00:18:16.884 "abort": true, 00:18:16.884 "nvme_admin": false, 00:18:16.884 "nvme_io": false 00:18:16.884 }, 00:18:16.884 "memory_domains": [ 00:18:16.884 { 00:18:16.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.884 "dma_device_type": 2 00:18:16.884 } 00:18:16.884 ], 00:18:16.884 "driver_specific": {} 00:18:16.884 } 00:18:16.884 ] 00:18:16.884 05:00:46 -- common/autotest_common.sh@895 -- # return 0 00:18:16.884 05:00:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:16.884 05:00:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:16.884 05:00:46 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:16.884 05:00:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:16.884 05:00:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:16.884 05:00:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:16.884 05:00:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:16.884 05:00:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:16.884 05:00:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:16.884 05:00:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:16.884 05:00:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:16.884 05:00:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:16.884 05:00:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.884 05:00:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.142 05:00:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:17.142 "name": "Existed_Raid", 00:18:17.142 "uuid": "bdd5c584-e88d-4636-aa21-6e400c1fcd9d", 00:18:17.142 "strip_size_kb": 0, 00:18:17.142 "state": "online", 00:18:17.142 "raid_level": "raid1", 00:18:17.142 "superblock": false, 00:18:17.142 "num_base_bdevs": 3, 00:18:17.142 "num_base_bdevs_discovered": 3, 00:18:17.142 "num_base_bdevs_operational": 3, 00:18:17.142 "base_bdevs_list": [ 00:18:17.142 { 00:18:17.142 "name": "BaseBdev1", 00:18:17.142 "uuid": "5443dd9c-8d99-4cc5-965b-977d7cf01908", 00:18:17.142 "is_configured": true, 00:18:17.142 "data_offset": 0, 00:18:17.142 "data_size": 65536 00:18:17.142 }, 00:18:17.142 { 00:18:17.142 "name": "BaseBdev2", 00:18:17.142 "uuid": "64a2246d-9783-4c86-a8f5-6bd2c01ec746", 00:18:17.142 "is_configured": true, 00:18:17.142 "data_offset": 0, 00:18:17.142 "data_size": 65536 00:18:17.142 }, 00:18:17.142 { 00:18:17.142 "name": "BaseBdev3", 00:18:17.142 "uuid": "6a4a5087-5bdf-47c7-abab-98a9052f036d", 00:18:17.142 "is_configured": true, 00:18:17.142 "data_offset": 0, 00:18:17.142 "data_size": 65536 00:18:17.142 } 00:18:17.142 ] 00:18:17.142 }' 00:18:17.142 05:00:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:17.142 05:00:46 -- common/autotest_common.sh@10 -- # set +x 00:18:17.708 05:00:47 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:17.966 [2024-04-27 
05:00:47.854783] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:18.249 05:00:47 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:18.249 05:00:47 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:18.249 05:00:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:18.249 05:00:47 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:18.249 05:00:47 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:18.249 05:00:47 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:18.249 05:00:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:18.249 05:00:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:18.249 05:00:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:18.249 05:00:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:18.249 05:00:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:18.249 05:00:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:18.249 05:00:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:18.249 05:00:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:18.249 05:00:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:18.249 05:00:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.249 05:00:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.508 05:00:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:18.508 "name": "Existed_Raid", 00:18:18.508 "uuid": "bdd5c584-e88d-4636-aa21-6e400c1fcd9d", 00:18:18.508 "strip_size_kb": 0, 00:18:18.508 "state": "online", 00:18:18.508 "raid_level": "raid1", 00:18:18.508 "superblock": false, 00:18:18.508 "num_base_bdevs": 3, 00:18:18.508 "num_base_bdevs_discovered": 2, 00:18:18.508 "num_base_bdevs_operational": 2, 00:18:18.508 "base_bdevs_list": [ 00:18:18.508 { 00:18:18.508 "name": null, 00:18:18.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.508 "is_configured": false, 00:18:18.508 "data_offset": 0, 00:18:18.508 "data_size": 65536 00:18:18.508 }, 00:18:18.508 { 00:18:18.508 "name": "BaseBdev2", 00:18:18.508 "uuid": "64a2246d-9783-4c86-a8f5-6bd2c01ec746", 00:18:18.508 "is_configured": true, 00:18:18.508 "data_offset": 0, 00:18:18.508 "data_size": 65536 00:18:18.508 }, 00:18:18.508 { 00:18:18.508 "name": "BaseBdev3", 00:18:18.508 "uuid": "6a4a5087-5bdf-47c7-abab-98a9052f036d", 00:18:18.508 "is_configured": true, 00:18:18.508 "data_offset": 0, 00:18:18.508 "data_size": 65536 00:18:18.508 } 00:18:18.508 ] 00:18:18.508 }' 00:18:18.508 05:00:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:18.508 05:00:48 -- common/autotest_common.sh@10 -- # set +x 00:18:19.073 05:00:48 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:19.073 05:00:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:19.073 05:00:48 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.073 05:00:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:19.330 05:00:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:19.330 05:00:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:19.330 05:00:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:19.595 [2024-04-27 05:00:49.437192] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
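For reference, the base-bdev removal step traced above can be reproduced by hand against the same bdev_svc app; this is a minimal sketch using only RPC calls that appear in this log (socket path and bdev names are the ones from this run; the trailing jq filter is an illustrative assumption, not part of the test script):

  # remove one malloc base bdev out from under the raid1 bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
  # raid1 is redundant, so the raid bdev is expected to stay online with one fewer discovered base bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state'

After the first removal the trace above verifies exactly this: state "online" with "num_base_bdevs_discovered": 2.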
00:18:19.595 05:00:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:19.595 05:00:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:19.595 05:00:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.595 05:00:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:19.855 05:00:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:19.855 05:00:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:19.855 05:00:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:20.134 [2024-04-27 05:00:50.017420] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:20.134 [2024-04-27 05:00:50.017754] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.415 [2024-04-27 05:00:50.017989] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.415 [2024-04-27 05:00:50.041347] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.415 [2024-04-27 05:00:50.041682] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:18:20.415 05:00:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:20.415 05:00:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:20.415 05:00:50 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.415 05:00:50 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:20.415 05:00:50 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:20.415 05:00:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:20.415 05:00:50 -- bdev/bdev_raid.sh@287 -- # killprocess 129303 00:18:20.415 05:00:50 -- common/autotest_common.sh@926 -- # '[' -z 129303 ']' 00:18:20.415 05:00:50 -- common/autotest_common.sh@930 -- # kill -0 129303 00:18:20.415 05:00:50 -- common/autotest_common.sh@931 -- # uname 00:18:20.415 05:00:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:20.415 05:00:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129303 00:18:20.673 05:00:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:20.673 05:00:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:20.673 05:00:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129303' 00:18:20.673 killing process with pid 129303 00:18:20.673 05:00:50 -- common/autotest_common.sh@945 -- # kill 129303 00:18:20.673 05:00:50 -- common/autotest_common.sh@950 -- # wait 129303 00:18:20.673 [2024-04-27 05:00:50.329610] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:20.673 [2024-04-27 05:00:50.329737] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:20.932 ************************************ 00:18:20.932 END TEST raid_state_function_test 00:18:20.932 ************************************ 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:20.932 00:18:20.932 real 0m12.225s 00:18:20.932 user 0m22.129s 00:18:20.932 sys 0m1.805s 00:18:20.932 05:00:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:20.932 05:00:50 -- common/autotest_common.sh@10 -- # set +x 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 
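The superblock variant that starts here differs from the run that just finished only in its last argument (true), which makes the test pass -s to bdev_raid_create. A minimal hand-run equivalent, assuming the same bdev_svc app and the three malloc base bdevs the test creates:

  # create the raid1 bdev with an on-disk superblock; with -s each base bdev reserves metadata
  # space, which shows up as data_offset 2048 / data_size 63488 in the bdev_raid_get_bdevs dumps
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

Without -s (the non-superblock run above) the same call omits the flag and the dumps show data_offset 0 / data_size 65536.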
00:18:20.932 05:00:50 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:20.932 05:00:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:20.932 05:00:50 -- common/autotest_common.sh@10 -- # set +x 00:18:20.932 ************************************ 00:18:20.932 START TEST raid_state_function_test_sb 00:18:20.932 ************************************ 00:18:20.932 05:00:50 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@226 -- # raid_pid=129686 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129686' 00:18:20.932 Process raid pid: 129686 00:18:20.932 05:00:50 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129686 /var/tmp/spdk-raid.sock 00:18:20.932 05:00:50 -- common/autotest_common.sh@819 -- # '[' -z 129686 ']' 00:18:20.932 05:00:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:20.932 05:00:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:20.932 05:00:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:20.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:20.932 05:00:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:20.932 05:00:50 -- common/autotest_common.sh@10 -- # set +x 00:18:20.932 [2024-04-27 05:00:50.815610] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:18:20.932 [2024-04-27 05:00:50.816096] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.190 [2024-04-27 05:00:50.979210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.446 [2024-04-27 05:00:51.100792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.446 [2024-04-27 05:00:51.183620] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:22.011 05:00:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:22.012 05:00:51 -- common/autotest_common.sh@852 -- # return 0 00:18:22.012 05:00:51 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:22.269 [2024-04-27 05:00:52.120462] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:22.269 [2024-04-27 05:00:52.120878] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:22.269 [2024-04-27 05:00:52.121015] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:22.269 [2024-04-27 05:00:52.121087] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:22.269 [2024-04-27 05:00:52.121237] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:22.269 [2024-04-27 05:00:52.121339] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:22.269 05:00:52 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:22.269 05:00:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:22.269 05:00:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:22.269 05:00:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:22.269 05:00:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:22.269 05:00:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:22.269 05:00:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:22.269 05:00:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:22.269 05:00:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:22.269 05:00:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:22.269 05:00:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.269 05:00:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.527 05:00:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:22.527 "name": "Existed_Raid", 00:18:22.527 "uuid": "af8638f8-e43a-424f-b828-4406b3df4e39", 00:18:22.527 "strip_size_kb": 0, 00:18:22.527 "state": "configuring", 00:18:22.527 "raid_level": "raid1", 00:18:22.527 "superblock": true, 00:18:22.527 "num_base_bdevs": 3, 00:18:22.527 "num_base_bdevs_discovered": 0, 00:18:22.527 "num_base_bdevs_operational": 3, 00:18:22.527 "base_bdevs_list": [ 00:18:22.527 { 00:18:22.527 "name": "BaseBdev1", 00:18:22.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.527 "is_configured": false, 00:18:22.527 "data_offset": 0, 00:18:22.527 "data_size": 0 00:18:22.527 }, 00:18:22.527 { 00:18:22.527 "name": "BaseBdev2", 00:18:22.527 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:22.527 "is_configured": false, 00:18:22.527 "data_offset": 0, 00:18:22.527 "data_size": 0 00:18:22.527 }, 00:18:22.527 { 00:18:22.527 "name": "BaseBdev3", 00:18:22.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.527 "is_configured": false, 00:18:22.527 "data_offset": 0, 00:18:22.527 "data_size": 0 00:18:22.527 } 00:18:22.527 ] 00:18:22.527 }' 00:18:22.527 05:00:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:22.527 05:00:52 -- common/autotest_common.sh@10 -- # set +x 00:18:23.459 05:00:53 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:23.459 [2024-04-27 05:00:53.300588] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:23.459 [2024-04-27 05:00:53.300944] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:23.459 05:00:53 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:23.717 [2024-04-27 05:00:53.536678] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:23.717 [2024-04-27 05:00:53.537071] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:23.717 [2024-04-27 05:00:53.537196] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:23.717 [2024-04-27 05:00:53.537356] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:23.717 [2024-04-27 05:00:53.537475] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:23.717 [2024-04-27 05:00:53.537640] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:23.717 05:00:53 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:23.973 [2024-04-27 05:00:53.823848] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:23.973 BaseBdev1 00:18:23.973 05:00:53 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:23.973 05:00:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:23.973 05:00:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:23.973 05:00:53 -- common/autotest_common.sh@889 -- # local i 00:18:23.973 05:00:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:23.973 05:00:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:23.974 05:00:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:24.231 05:00:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:24.489 [ 00:18:24.489 { 00:18:24.489 "name": "BaseBdev1", 00:18:24.489 "aliases": [ 00:18:24.489 "6996e4e8-2df7-4acf-88d9-e4d17d74986b" 00:18:24.489 ], 00:18:24.489 "product_name": "Malloc disk", 00:18:24.489 "block_size": 512, 00:18:24.489 "num_blocks": 65536, 00:18:24.489 "uuid": "6996e4e8-2df7-4acf-88d9-e4d17d74986b", 00:18:24.489 "assigned_rate_limits": { 00:18:24.489 "rw_ios_per_sec": 0, 00:18:24.489 "rw_mbytes_per_sec": 0, 00:18:24.489 "r_mbytes_per_sec": 0, 00:18:24.489 "w_mbytes_per_sec": 0 
00:18:24.489 }, 00:18:24.489 "claimed": true, 00:18:24.489 "claim_type": "exclusive_write", 00:18:24.489 "zoned": false, 00:18:24.489 "supported_io_types": { 00:18:24.489 "read": true, 00:18:24.489 "write": true, 00:18:24.489 "unmap": true, 00:18:24.489 "write_zeroes": true, 00:18:24.489 "flush": true, 00:18:24.489 "reset": true, 00:18:24.489 "compare": false, 00:18:24.489 "compare_and_write": false, 00:18:24.489 "abort": true, 00:18:24.489 "nvme_admin": false, 00:18:24.489 "nvme_io": false 00:18:24.489 }, 00:18:24.489 "memory_domains": [ 00:18:24.489 { 00:18:24.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.489 "dma_device_type": 2 00:18:24.489 } 00:18:24.489 ], 00:18:24.489 "driver_specific": {} 00:18:24.489 } 00:18:24.489 ] 00:18:24.489 05:00:54 -- common/autotest_common.sh@895 -- # return 0 00:18:24.489 05:00:54 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:24.489 05:00:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:24.489 05:00:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:24.489 05:00:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:24.489 05:00:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:24.489 05:00:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:24.489 05:00:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:24.489 05:00:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:24.489 05:00:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:24.489 05:00:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:24.489 05:00:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.489 05:00:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.747 05:00:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:24.747 "name": "Existed_Raid", 00:18:24.747 "uuid": "5d3030a9-166b-4995-bde9-91934458972d", 00:18:24.747 "strip_size_kb": 0, 00:18:24.747 "state": "configuring", 00:18:24.747 "raid_level": "raid1", 00:18:24.747 "superblock": true, 00:18:24.747 "num_base_bdevs": 3, 00:18:24.747 "num_base_bdevs_discovered": 1, 00:18:24.747 "num_base_bdevs_operational": 3, 00:18:24.747 "base_bdevs_list": [ 00:18:24.747 { 00:18:24.747 "name": "BaseBdev1", 00:18:24.747 "uuid": "6996e4e8-2df7-4acf-88d9-e4d17d74986b", 00:18:24.747 "is_configured": true, 00:18:24.747 "data_offset": 2048, 00:18:24.747 "data_size": 63488 00:18:24.747 }, 00:18:24.747 { 00:18:24.747 "name": "BaseBdev2", 00:18:24.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.747 "is_configured": false, 00:18:24.747 "data_offset": 0, 00:18:24.747 "data_size": 0 00:18:24.747 }, 00:18:24.747 { 00:18:24.747 "name": "BaseBdev3", 00:18:24.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.747 "is_configured": false, 00:18:24.747 "data_offset": 0, 00:18:24.747 "data_size": 0 00:18:24.747 } 00:18:24.747 ] 00:18:24.747 }' 00:18:24.747 05:00:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:24.747 05:00:54 -- common/autotest_common.sh@10 -- # set +x 00:18:25.311 05:00:55 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:25.568 [2024-04-27 05:00:55.464383] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:25.568 [2024-04-27 05:00:55.464727] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006680 name Existed_Raid, state configuring 00:18:25.826 05:00:55 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:25.826 05:00:55 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:26.083 05:00:55 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:26.084 BaseBdev1 00:18:26.342 05:00:55 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:26.342 05:00:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:26.342 05:00:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:26.342 05:00:55 -- common/autotest_common.sh@889 -- # local i 00:18:26.342 05:00:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:26.342 05:00:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:26.342 05:00:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:26.342 05:00:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:26.600 [ 00:18:26.600 { 00:18:26.600 "name": "BaseBdev1", 00:18:26.600 "aliases": [ 00:18:26.600 "088aada1-9a72-40d6-bd01-217ca3d003a1" 00:18:26.600 ], 00:18:26.600 "product_name": "Malloc disk", 00:18:26.600 "block_size": 512, 00:18:26.600 "num_blocks": 65536, 00:18:26.600 "uuid": "088aada1-9a72-40d6-bd01-217ca3d003a1", 00:18:26.600 "assigned_rate_limits": { 00:18:26.600 "rw_ios_per_sec": 0, 00:18:26.600 "rw_mbytes_per_sec": 0, 00:18:26.600 "r_mbytes_per_sec": 0, 00:18:26.600 "w_mbytes_per_sec": 0 00:18:26.600 }, 00:18:26.601 "claimed": false, 00:18:26.601 "zoned": false, 00:18:26.601 "supported_io_types": { 00:18:26.601 "read": true, 00:18:26.601 "write": true, 00:18:26.601 "unmap": true, 00:18:26.601 "write_zeroes": true, 00:18:26.601 "flush": true, 00:18:26.601 "reset": true, 00:18:26.601 "compare": false, 00:18:26.601 "compare_and_write": false, 00:18:26.601 "abort": true, 00:18:26.601 "nvme_admin": false, 00:18:26.601 "nvme_io": false 00:18:26.601 }, 00:18:26.601 "memory_domains": [ 00:18:26.601 { 00:18:26.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.601 "dma_device_type": 2 00:18:26.601 } 00:18:26.601 ], 00:18:26.601 "driver_specific": {} 00:18:26.601 } 00:18:26.601 ] 00:18:26.859 05:00:56 -- common/autotest_common.sh@895 -- # return 0 00:18:26.859 05:00:56 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:26.859 [2024-04-27 05:00:56.733061] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:26.859 [2024-04-27 05:00:56.735809] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:26.859 [2024-04-27 05:00:56.736009] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:26.859 [2024-04-27 05:00:56.736142] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:26.859 [2024-04-27 05:00:56.736297] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:26.859 05:00:56 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:26.859 05:00:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:26.859 05:00:56 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:26.859 05:00:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:26.859 05:00:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:26.859 05:00:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:26.859 05:00:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:26.859 05:00:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:26.859 05:00:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:26.859 05:00:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:27.127 05:00:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:27.127 05:00:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:27.127 05:00:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.127 05:00:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.127 05:00:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:27.127 "name": "Existed_Raid", 00:18:27.127 "uuid": "2c46615c-b87f-49fe-ad32-5ce3e60a64c4", 00:18:27.127 "strip_size_kb": 0, 00:18:27.127 "state": "configuring", 00:18:27.127 "raid_level": "raid1", 00:18:27.127 "superblock": true, 00:18:27.127 "num_base_bdevs": 3, 00:18:27.127 "num_base_bdevs_discovered": 1, 00:18:27.127 "num_base_bdevs_operational": 3, 00:18:27.127 "base_bdevs_list": [ 00:18:27.127 { 00:18:27.127 "name": "BaseBdev1", 00:18:27.127 "uuid": "088aada1-9a72-40d6-bd01-217ca3d003a1", 00:18:27.127 "is_configured": true, 00:18:27.127 "data_offset": 2048, 00:18:27.127 "data_size": 63488 00:18:27.127 }, 00:18:27.127 { 00:18:27.127 "name": "BaseBdev2", 00:18:27.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.127 "is_configured": false, 00:18:27.127 "data_offset": 0, 00:18:27.127 "data_size": 0 00:18:27.127 }, 00:18:27.127 { 00:18:27.127 "name": "BaseBdev3", 00:18:27.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.127 "is_configured": false, 00:18:27.127 "data_offset": 0, 00:18:27.127 "data_size": 0 00:18:27.127 } 00:18:27.127 ] 00:18:27.127 }' 00:18:27.127 05:00:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:27.127 05:00:56 -- common/autotest_common.sh@10 -- # set +x 00:18:28.060 05:00:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:28.060 [2024-04-27 05:00:57.894335] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:28.060 BaseBdev2 00:18:28.060 05:00:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:28.060 05:00:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:28.060 05:00:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:28.061 05:00:57 -- common/autotest_common.sh@889 -- # local i 00:18:28.061 05:00:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:28.061 05:00:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:28.061 05:00:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:28.317 05:00:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:28.575 [ 00:18:28.575 { 00:18:28.575 "name": "BaseBdev2", 00:18:28.575 "aliases": [ 00:18:28.575 
"30667bb0-c4ac-4c52-bed4-e40f8e21de8b" 00:18:28.575 ], 00:18:28.575 "product_name": "Malloc disk", 00:18:28.575 "block_size": 512, 00:18:28.575 "num_blocks": 65536, 00:18:28.575 "uuid": "30667bb0-c4ac-4c52-bed4-e40f8e21de8b", 00:18:28.575 "assigned_rate_limits": { 00:18:28.575 "rw_ios_per_sec": 0, 00:18:28.575 "rw_mbytes_per_sec": 0, 00:18:28.575 "r_mbytes_per_sec": 0, 00:18:28.575 "w_mbytes_per_sec": 0 00:18:28.575 }, 00:18:28.575 "claimed": true, 00:18:28.575 "claim_type": "exclusive_write", 00:18:28.575 "zoned": false, 00:18:28.575 "supported_io_types": { 00:18:28.575 "read": true, 00:18:28.575 "write": true, 00:18:28.575 "unmap": true, 00:18:28.575 "write_zeroes": true, 00:18:28.575 "flush": true, 00:18:28.575 "reset": true, 00:18:28.575 "compare": false, 00:18:28.575 "compare_and_write": false, 00:18:28.575 "abort": true, 00:18:28.575 "nvme_admin": false, 00:18:28.575 "nvme_io": false 00:18:28.575 }, 00:18:28.575 "memory_domains": [ 00:18:28.575 { 00:18:28.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.575 "dma_device_type": 2 00:18:28.575 } 00:18:28.575 ], 00:18:28.575 "driver_specific": {} 00:18:28.575 } 00:18:28.575 ] 00:18:28.575 05:00:58 -- common/autotest_common.sh@895 -- # return 0 00:18:28.575 05:00:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:28.575 05:00:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:28.575 05:00:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:28.575 05:00:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:28.575 05:00:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:28.575 05:00:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:28.575 05:00:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:28.575 05:00:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:28.575 05:00:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:28.575 05:00:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:28.575 05:00:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:28.575 05:00:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:28.575 05:00:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.575 05:00:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.833 05:00:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:28.833 "name": "Existed_Raid", 00:18:28.833 "uuid": "2c46615c-b87f-49fe-ad32-5ce3e60a64c4", 00:18:28.833 "strip_size_kb": 0, 00:18:28.833 "state": "configuring", 00:18:28.833 "raid_level": "raid1", 00:18:28.833 "superblock": true, 00:18:28.833 "num_base_bdevs": 3, 00:18:28.833 "num_base_bdevs_discovered": 2, 00:18:28.833 "num_base_bdevs_operational": 3, 00:18:28.833 "base_bdevs_list": [ 00:18:28.833 { 00:18:28.833 "name": "BaseBdev1", 00:18:28.833 "uuid": "088aada1-9a72-40d6-bd01-217ca3d003a1", 00:18:28.833 "is_configured": true, 00:18:28.833 "data_offset": 2048, 00:18:28.833 "data_size": 63488 00:18:28.833 }, 00:18:28.833 { 00:18:28.833 "name": "BaseBdev2", 00:18:28.833 "uuid": "30667bb0-c4ac-4c52-bed4-e40f8e21de8b", 00:18:28.833 "is_configured": true, 00:18:28.833 "data_offset": 2048, 00:18:28.833 "data_size": 63488 00:18:28.833 }, 00:18:28.833 { 00:18:28.833 "name": "BaseBdev3", 00:18:28.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.833 "is_configured": false, 00:18:28.833 "data_offset": 0, 00:18:28.833 "data_size": 0 00:18:28.833 } 
00:18:28.833 ] 00:18:28.833 }' 00:18:28.833 05:00:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:28.833 05:00:58 -- common/autotest_common.sh@10 -- # set +x 00:18:29.766 05:00:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:29.766 [2024-04-27 05:00:59.619545] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:29.766 [2024-04-27 05:00:59.620182] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:18:29.766 [2024-04-27 05:00:59.620367] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:29.766 [2024-04-27 05:00:59.620666] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:29.766 [2024-04-27 05:00:59.621281] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:18:29.766 [2024-04-27 05:00:59.621419] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:18:29.766 BaseBdev3 00:18:29.766 [2024-04-27 05:00:59.621743] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.766 05:00:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:29.766 05:00:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:29.766 05:00:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:29.766 05:00:59 -- common/autotest_common.sh@889 -- # local i 00:18:29.766 05:00:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:29.766 05:00:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:29.766 05:00:59 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:30.023 05:00:59 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:30.292 [ 00:18:30.292 { 00:18:30.292 "name": "BaseBdev3", 00:18:30.292 "aliases": [ 00:18:30.292 "c0453c22-5693-43b4-a4da-6e0519ed029a" 00:18:30.292 ], 00:18:30.292 "product_name": "Malloc disk", 00:18:30.292 "block_size": 512, 00:18:30.292 "num_blocks": 65536, 00:18:30.292 "uuid": "c0453c22-5693-43b4-a4da-6e0519ed029a", 00:18:30.292 "assigned_rate_limits": { 00:18:30.292 "rw_ios_per_sec": 0, 00:18:30.292 "rw_mbytes_per_sec": 0, 00:18:30.292 "r_mbytes_per_sec": 0, 00:18:30.292 "w_mbytes_per_sec": 0 00:18:30.292 }, 00:18:30.292 "claimed": true, 00:18:30.292 "claim_type": "exclusive_write", 00:18:30.292 "zoned": false, 00:18:30.292 "supported_io_types": { 00:18:30.292 "read": true, 00:18:30.292 "write": true, 00:18:30.292 "unmap": true, 00:18:30.292 "write_zeroes": true, 00:18:30.292 "flush": true, 00:18:30.292 "reset": true, 00:18:30.292 "compare": false, 00:18:30.292 "compare_and_write": false, 00:18:30.292 "abort": true, 00:18:30.293 "nvme_admin": false, 00:18:30.293 "nvme_io": false 00:18:30.293 }, 00:18:30.293 "memory_domains": [ 00:18:30.293 { 00:18:30.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.293 "dma_device_type": 2 00:18:30.293 } 00:18:30.293 ], 00:18:30.293 "driver_specific": {} 00:18:30.293 } 00:18:30.293 ] 00:18:30.552 05:01:00 -- common/autotest_common.sh@895 -- # return 0 00:18:30.552 05:01:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:30.552 05:01:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:30.552 05:01:00 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:30.552 05:01:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:30.552 05:01:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:30.552 05:01:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:30.552 05:01:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:30.552 05:01:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:30.552 05:01:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:30.552 05:01:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:30.552 05:01:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:30.552 05:01:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:30.552 05:01:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.552 05:01:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.809 05:01:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.809 "name": "Existed_Raid", 00:18:30.809 "uuid": "2c46615c-b87f-49fe-ad32-5ce3e60a64c4", 00:18:30.809 "strip_size_kb": 0, 00:18:30.809 "state": "online", 00:18:30.809 "raid_level": "raid1", 00:18:30.809 "superblock": true, 00:18:30.809 "num_base_bdevs": 3, 00:18:30.809 "num_base_bdevs_discovered": 3, 00:18:30.809 "num_base_bdevs_operational": 3, 00:18:30.809 "base_bdevs_list": [ 00:18:30.809 { 00:18:30.809 "name": "BaseBdev1", 00:18:30.809 "uuid": "088aada1-9a72-40d6-bd01-217ca3d003a1", 00:18:30.809 "is_configured": true, 00:18:30.809 "data_offset": 2048, 00:18:30.809 "data_size": 63488 00:18:30.809 }, 00:18:30.809 { 00:18:30.809 "name": "BaseBdev2", 00:18:30.809 "uuid": "30667bb0-c4ac-4c52-bed4-e40f8e21de8b", 00:18:30.809 "is_configured": true, 00:18:30.810 "data_offset": 2048, 00:18:30.810 "data_size": 63488 00:18:30.810 }, 00:18:30.810 { 00:18:30.810 "name": "BaseBdev3", 00:18:30.810 "uuid": "c0453c22-5693-43b4-a4da-6e0519ed029a", 00:18:30.810 "is_configured": true, 00:18:30.810 "data_offset": 2048, 00:18:30.810 "data_size": 63488 00:18:30.810 } 00:18:30.810 ] 00:18:30.810 }' 00:18:30.810 05:01:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.810 05:01:00 -- common/autotest_common.sh@10 -- # set +x 00:18:31.375 05:01:01 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:31.633 [2024-04-27 05:01:01.336232] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:31.633 05:01:01 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:31.633 05:01:01 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:31.633 05:01:01 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:31.633 05:01:01 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:31.633 05:01:01 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:31.633 05:01:01 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:31.633 05:01:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:31.633 05:01:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:31.633 05:01:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:31.633 05:01:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:31.633 05:01:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:31.633 05:01:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:31.633 05:01:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
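(Annotation, not part of the captured console output: the verify_raid_bdev_state helper traced here reduces to one RPC query plus the jq filter visible in this trace, followed by string comparisons against the expected state, raid level, strip size and bdev counts passed in. A minimal manual sketch of the same check, assuming only the socket path, raid bdev name and JSON field names that appear in the captured output, would be:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.raid_level) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
After BaseBdev1 is removed above, the JSON captured just below reports state "online" with 2 base bdevs discovered and 2 operational out of num_base_bdevs 3, matching the "online raid1 0 2" arguments of this verify call.)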
00:18:31.633 05:01:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:31.633 05:01:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:31.633 05:01:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.633 05:01:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.891 05:01:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:31.891 "name": "Existed_Raid", 00:18:31.891 "uuid": "2c46615c-b87f-49fe-ad32-5ce3e60a64c4", 00:18:31.891 "strip_size_kb": 0, 00:18:31.891 "state": "online", 00:18:31.891 "raid_level": "raid1", 00:18:31.891 "superblock": true, 00:18:31.891 "num_base_bdevs": 3, 00:18:31.891 "num_base_bdevs_discovered": 2, 00:18:31.891 "num_base_bdevs_operational": 2, 00:18:31.891 "base_bdevs_list": [ 00:18:31.891 { 00:18:31.891 "name": null, 00:18:31.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.891 "is_configured": false, 00:18:31.891 "data_offset": 2048, 00:18:31.891 "data_size": 63488 00:18:31.891 }, 00:18:31.891 { 00:18:31.891 "name": "BaseBdev2", 00:18:31.891 "uuid": "30667bb0-c4ac-4c52-bed4-e40f8e21de8b", 00:18:31.891 "is_configured": true, 00:18:31.891 "data_offset": 2048, 00:18:31.891 "data_size": 63488 00:18:31.891 }, 00:18:31.891 { 00:18:31.891 "name": "BaseBdev3", 00:18:31.891 "uuid": "c0453c22-5693-43b4-a4da-6e0519ed029a", 00:18:31.891 "is_configured": true, 00:18:31.891 "data_offset": 2048, 00:18:31.891 "data_size": 63488 00:18:31.891 } 00:18:31.891 ] 00:18:31.891 }' 00:18:31.891 05:01:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:31.891 05:01:01 -- common/autotest_common.sh@10 -- # set +x 00:18:32.456 05:01:02 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:32.456 05:01:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:32.713 05:01:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:32.713 05:01:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.713 05:01:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:32.713 05:01:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:32.713 05:01:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:32.970 [2024-04-27 05:01:02.808295] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:32.970 05:01:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:32.970 05:01:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:32.970 05:01:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:32.970 05:01:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.226 05:01:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:33.226 05:01:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:33.226 05:01:03 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:33.484 [2024-04-27 05:01:03.329394] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:33.484 [2024-04-27 05:01:03.329757] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.484 [2024-04-27 05:01:03.329978] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.484 [2024-04-27 05:01:03.357404] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:33.484 [2024-04-27 05:01:03.357755] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:18:33.484 05:01:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:33.484 05:01:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:33.484 05:01:03 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.741 05:01:03 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:33.998 05:01:03 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:33.998 05:01:03 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:33.998 05:01:03 -- bdev/bdev_raid.sh@287 -- # killprocess 129686 00:18:33.998 05:01:03 -- common/autotest_common.sh@926 -- # '[' -z 129686 ']' 00:18:33.998 05:01:03 -- common/autotest_common.sh@930 -- # kill -0 129686 00:18:33.998 05:01:03 -- common/autotest_common.sh@931 -- # uname 00:18:33.998 05:01:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:33.998 05:01:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129686 00:18:33.998 killing process with pid 129686 00:18:33.998 05:01:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:33.998 05:01:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:33.998 05:01:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129686' 00:18:33.998 05:01:03 -- common/autotest_common.sh@945 -- # kill 129686 00:18:33.998 [2024-04-27 05:01:03.680728] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:33.998 05:01:03 -- common/autotest_common.sh@950 -- # wait 129686 00:18:33.998 [2024-04-27 05:01:03.680848] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:34.256 00:18:34.256 real 0m13.299s 00:18:34.256 user 0m24.258s 00:18:34.256 sys 0m1.697s 00:18:34.256 05:01:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:34.256 05:01:04 -- common/autotest_common.sh@10 -- # set +x 00:18:34.256 ************************************ 00:18:34.256 END TEST raid_state_function_test_sb 00:18:34.256 ************************************ 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:18:34.256 05:01:04 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:34.256 05:01:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:34.256 05:01:04 -- common/autotest_common.sh@10 -- # set +x 00:18:34.256 ************************************ 00:18:34.256 START TEST raid_superblock_test 00:18:34.256 ************************************ 00:18:34.256 05:01:04 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@343 
-- # local raid_bdev_name=raid_bdev1 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@357 -- # raid_pid=130079 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:34.256 05:01:04 -- bdev/bdev_raid.sh@358 -- # waitforlisten 130079 /var/tmp/spdk-raid.sock 00:18:34.256 05:01:04 -- common/autotest_common.sh@819 -- # '[' -z 130079 ']' 00:18:34.256 05:01:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:34.256 05:01:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:34.256 05:01:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:34.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:34.256 05:01:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:34.256 05:01:04 -- common/autotest_common.sh@10 -- # set +x 00:18:34.513 [2024-04-27 05:01:04.174202] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:34.513 [2024-04-27 05:01:04.174690] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130079 ] 00:18:34.513 [2024-04-27 05:01:04.331314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.773 [2024-04-27 05:01:04.452162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.773 [2024-04-27 05:01:04.531576] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:35.347 05:01:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:35.347 05:01:05 -- common/autotest_common.sh@852 -- # return 0 00:18:35.347 05:01:05 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:35.347 05:01:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:35.347 05:01:05 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:35.347 05:01:05 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:35.347 05:01:05 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:35.347 05:01:05 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:35.347 05:01:05 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:35.347 05:01:05 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:35.347 05:01:05 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:35.604 malloc1 00:18:35.604 05:01:05 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:35.861 [2024-04-27 05:01:05.648119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:35.861 [2024-04-27 05:01:05.648619] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.861 [2024-04-27 05:01:05.648827] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:18:35.861 [2024-04-27 05:01:05.649017] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.861 [2024-04-27 05:01:05.652073] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.861 [2024-04-27 05:01:05.652271] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:35.861 pt1 00:18:35.861 05:01:05 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:35.861 05:01:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:35.861 05:01:05 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:35.861 05:01:05 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:35.861 05:01:05 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:35.861 05:01:05 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:35.861 05:01:05 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:35.861 05:01:05 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:35.861 05:01:05 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:36.119 malloc2 00:18:36.119 05:01:05 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:36.377 [2024-04-27 05:01:06.128378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:36.377 [2024-04-27 05:01:06.128718] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.377 [2024-04-27 05:01:06.128823] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:36.377 [2024-04-27 05:01:06.129029] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.377 [2024-04-27 05:01:06.131999] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.377 [2024-04-27 05:01:06.132180] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:36.377 pt2 00:18:36.377 05:01:06 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:36.377 05:01:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:36.377 05:01:06 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:36.377 05:01:06 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:36.377 05:01:06 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:36.377 05:01:06 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:36.377 05:01:06 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:36.377 05:01:06 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:36.377 05:01:06 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:36.635 malloc3 00:18:36.635 05:01:06 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:36.893 [2024-04-27 05:01:06.673328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:36.893 [2024-04-27 05:01:06.673707] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.893 [2024-04-27 05:01:06.673821] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:36.893 [2024-04-27 05:01:06.674116] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.893 [2024-04-27 05:01:06.677101] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.893 [2024-04-27 05:01:06.677292] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:36.893 pt3 00:18:36.893 05:01:06 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:36.893 05:01:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:36.893 05:01:06 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:18:37.151 [2024-04-27 05:01:06.909849] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:37.151 [2024-04-27 05:01:06.912706] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:37.151 [2024-04-27 05:01:06.912916] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:37.151 [2024-04-27 05:01:06.913262] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:18:37.151 [2024-04-27 05:01:06.913391] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:37.151 [2024-04-27 05:01:06.913627] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:18:37.151 [2024-04-27 05:01:06.914245] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:18:37.151 [2024-04-27 05:01:06.914383] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:18:37.151 [2024-04-27 05:01:06.914741] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.151 05:01:06 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:37.151 05:01:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:37.151 05:01:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:37.151 05:01:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:37.151 05:01:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:37.151 05:01:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:37.151 05:01:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:37.151 05:01:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:37.151 05:01:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:37.151 05:01:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:37.151 05:01:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.151 05:01:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.409 05:01:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:37.409 "name": "raid_bdev1", 00:18:37.409 "uuid": "f7a6a33d-239a-4f7d-b4e9-38df9370697e", 00:18:37.409 "strip_size_kb": 0, 00:18:37.409 "state": "online", 00:18:37.409 "raid_level": "raid1", 00:18:37.409 "superblock": true, 00:18:37.409 "num_base_bdevs": 3, 00:18:37.409 "num_base_bdevs_discovered": 3, 00:18:37.409 "num_base_bdevs_operational": 3, 00:18:37.409 "base_bdevs_list": [ 00:18:37.409 { 00:18:37.409 "name": 
"pt1", 00:18:37.409 "uuid": "b4d043bf-acda-5dcf-9f21-290968db5f81", 00:18:37.409 "is_configured": true, 00:18:37.409 "data_offset": 2048, 00:18:37.409 "data_size": 63488 00:18:37.409 }, 00:18:37.409 { 00:18:37.409 "name": "pt2", 00:18:37.409 "uuid": "d58f4c64-3bc5-54df-8c56-50c892e8127f", 00:18:37.409 "is_configured": true, 00:18:37.409 "data_offset": 2048, 00:18:37.409 "data_size": 63488 00:18:37.409 }, 00:18:37.409 { 00:18:37.409 "name": "pt3", 00:18:37.409 "uuid": "a9ca9f65-05ae-5beb-90db-7dc5a5650eab", 00:18:37.410 "is_configured": true, 00:18:37.410 "data_offset": 2048, 00:18:37.410 "data_size": 63488 00:18:37.410 } 00:18:37.410 ] 00:18:37.410 }' 00:18:37.410 05:01:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:37.410 05:01:07 -- common/autotest_common.sh@10 -- # set +x 00:18:37.976 05:01:07 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:37.976 05:01:07 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:38.234 [2024-04-27 05:01:08.115290] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.492 05:01:08 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f7a6a33d-239a-4f7d-b4e9-38df9370697e 00:18:38.492 05:01:08 -- bdev/bdev_raid.sh@380 -- # '[' -z f7a6a33d-239a-4f7d-b4e9-38df9370697e ']' 00:18:38.492 05:01:08 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:38.492 [2024-04-27 05:01:08.371062] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.492 [2024-04-27 05:01:08.371290] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.492 [2024-04-27 05:01:08.371541] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.492 [2024-04-27 05:01:08.371801] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.492 [2024-04-27 05:01:08.371932] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:18:38.749 05:01:08 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.749 05:01:08 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:39.007 05:01:08 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:39.007 05:01:08 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:39.007 05:01:08 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.007 05:01:08 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:39.265 05:01:08 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.265 05:01:08 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:39.265 05:01:09 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.265 05:01:09 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:39.522 05:01:09 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:39.522 05:01:09 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:39.780 05:01:09 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:39.780 05:01:09 -- 
bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:39.780 05:01:09 -- common/autotest_common.sh@640 -- # local es=0 00:18:39.780 05:01:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:39.780 05:01:09 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:39.780 05:01:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:39.780 05:01:09 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:39.780 05:01:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:39.780 05:01:09 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:39.780 05:01:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:39.780 05:01:09 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:39.780 05:01:09 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:39.780 05:01:09 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:40.038 [2024-04-27 05:01:09.859401] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:40.038 [2024-04-27 05:01:09.862233] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:40.038 [2024-04-27 05:01:09.862451] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:40.038 [2024-04-27 05:01:09.862571] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:40.038 [2024-04-27 05:01:09.862874] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:40.038 [2024-04-27 05:01:09.863072] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:40.038 [2024-04-27 05:01:09.863251] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.038 [2024-04-27 05:01:09.863370] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:18:40.038 request: 00:18:40.038 { 00:18:40.038 "name": "raid_bdev1", 00:18:40.038 "raid_level": "raid1", 00:18:40.038 "base_bdevs": [ 00:18:40.038 "malloc1", 00:18:40.038 "malloc2", 00:18:40.038 "malloc3" 00:18:40.038 ], 00:18:40.038 "superblock": false, 00:18:40.038 "method": "bdev_raid_create", 00:18:40.038 "req_id": 1 00:18:40.038 } 00:18:40.038 Got JSON-RPC error response 00:18:40.038 response: 00:18:40.038 { 00:18:40.038 "code": -17, 00:18:40.038 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:40.038 } 00:18:40.038 05:01:09 -- common/autotest_common.sh@643 -- # es=1 00:18:40.038 05:01:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:40.038 05:01:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:40.038 05:01:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:40.038 05:01:09 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:40.038 05:01:09 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:40.295 05:01:10 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:40.295 05:01:10 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:40.295 05:01:10 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:40.570 [2024-04-27 05:01:10.339914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:40.570 [2024-04-27 05:01:10.340353] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.570 [2024-04-27 05:01:10.340449] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:40.570 [2024-04-27 05:01:10.340704] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.570 [2024-04-27 05:01:10.343606] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.570 [2024-04-27 05:01:10.343798] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:40.570 [2024-04-27 05:01:10.344055] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:40.570 [2024-04-27 05:01:10.344238] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:40.570 pt1 00:18:40.570 05:01:10 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:40.570 05:01:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:40.571 05:01:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:40.571 05:01:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:40.571 05:01:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:40.571 05:01:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:40.571 05:01:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:40.571 05:01:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:40.571 05:01:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:40.571 05:01:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:40.571 05:01:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.571 05:01:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.835 05:01:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:40.835 "name": "raid_bdev1", 00:18:40.835 "uuid": "f7a6a33d-239a-4f7d-b4e9-38df9370697e", 00:18:40.835 "strip_size_kb": 0, 00:18:40.835 "state": "configuring", 00:18:40.835 "raid_level": "raid1", 00:18:40.835 "superblock": true, 00:18:40.835 "num_base_bdevs": 3, 00:18:40.835 "num_base_bdevs_discovered": 1, 00:18:40.835 "num_base_bdevs_operational": 3, 00:18:40.835 "base_bdevs_list": [ 00:18:40.835 { 00:18:40.835 "name": "pt1", 00:18:40.835 "uuid": "b4d043bf-acda-5dcf-9f21-290968db5f81", 00:18:40.835 "is_configured": true, 00:18:40.835 "data_offset": 2048, 00:18:40.835 "data_size": 63488 00:18:40.835 }, 00:18:40.835 { 00:18:40.835 "name": null, 00:18:40.835 "uuid": "d58f4c64-3bc5-54df-8c56-50c892e8127f", 00:18:40.835 "is_configured": false, 00:18:40.835 "data_offset": 2048, 00:18:40.835 "data_size": 63488 00:18:40.835 }, 00:18:40.835 { 00:18:40.835 "name": null, 00:18:40.835 "uuid": "a9ca9f65-05ae-5beb-90db-7dc5a5650eab", 00:18:40.835 "is_configured": false, 00:18:40.836 "data_offset": 2048, 00:18:40.836 
"data_size": 63488 00:18:40.836 } 00:18:40.836 ] 00:18:40.836 }' 00:18:40.836 05:01:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:40.836 05:01:10 -- common/autotest_common.sh@10 -- # set +x 00:18:41.400 05:01:11 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:18:41.400 05:01:11 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:41.657 [2024-04-27 05:01:11.444497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:41.657 [2024-04-27 05:01:11.444967] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.657 [2024-04-27 05:01:11.445079] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:41.657 [2024-04-27 05:01:11.445305] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.657 [2024-04-27 05:01:11.445912] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.657 [2024-04-27 05:01:11.446084] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:41.657 [2024-04-27 05:01:11.446376] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:41.657 [2024-04-27 05:01:11.446516] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:41.657 pt2 00:18:41.657 05:01:11 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:41.915 [2024-04-27 05:01:11.716642] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:41.915 05:01:11 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:41.915 05:01:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:41.915 05:01:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:41.915 05:01:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:41.915 05:01:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:41.915 05:01:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:41.915 05:01:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:41.915 05:01:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:41.915 05:01:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:41.915 05:01:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:41.915 05:01:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.915 05:01:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.173 05:01:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:42.173 "name": "raid_bdev1", 00:18:42.173 "uuid": "f7a6a33d-239a-4f7d-b4e9-38df9370697e", 00:18:42.173 "strip_size_kb": 0, 00:18:42.173 "state": "configuring", 00:18:42.173 "raid_level": "raid1", 00:18:42.173 "superblock": true, 00:18:42.173 "num_base_bdevs": 3, 00:18:42.173 "num_base_bdevs_discovered": 1, 00:18:42.173 "num_base_bdevs_operational": 3, 00:18:42.173 "base_bdevs_list": [ 00:18:42.173 { 00:18:42.173 "name": "pt1", 00:18:42.173 "uuid": "b4d043bf-acda-5dcf-9f21-290968db5f81", 00:18:42.173 "is_configured": true, 00:18:42.173 "data_offset": 2048, 00:18:42.173 "data_size": 63488 00:18:42.173 }, 00:18:42.173 { 00:18:42.173 "name": null, 00:18:42.173 "uuid": "d58f4c64-3bc5-54df-8c56-50c892e8127f", 
00:18:42.173 "is_configured": false, 00:18:42.173 "data_offset": 2048, 00:18:42.173 "data_size": 63488 00:18:42.173 }, 00:18:42.173 { 00:18:42.173 "name": null, 00:18:42.173 "uuid": "a9ca9f65-05ae-5beb-90db-7dc5a5650eab", 00:18:42.173 "is_configured": false, 00:18:42.173 "data_offset": 2048, 00:18:42.173 "data_size": 63488 00:18:42.173 } 00:18:42.173 ] 00:18:42.173 }' 00:18:42.173 05:01:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:42.173 05:01:11 -- common/autotest_common.sh@10 -- # set +x 00:18:42.739 05:01:12 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:42.739 05:01:12 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:42.739 05:01:12 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:42.997 [2024-04-27 05:01:12.848822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:42.997 [2024-04-27 05:01:12.849268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.997 [2024-04-27 05:01:12.849363] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:42.997 [2024-04-27 05:01:12.849627] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.997 [2024-04-27 05:01:12.850272] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.997 [2024-04-27 05:01:12.850440] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:42.997 [2024-04-27 05:01:12.850691] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:42.997 [2024-04-27 05:01:12.850831] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:42.997 pt2 00:18:42.997 05:01:12 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:42.997 05:01:12 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:42.997 05:01:12 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:43.254 [2024-04-27 05:01:13.116929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:43.254 [2024-04-27 05:01:13.117321] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.254 [2024-04-27 05:01:13.117415] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:43.254 [2024-04-27 05:01:13.117661] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.254 [2024-04-27 05:01:13.118305] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.254 [2024-04-27 05:01:13.118475] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:43.254 [2024-04-27 05:01:13.118735] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:43.254 [2024-04-27 05:01:13.118891] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:43.254 [2024-04-27 05:01:13.119127] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:18:43.254 [2024-04-27 05:01:13.119248] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:43.254 [2024-04-27 05:01:13.119384] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:43.254 
[2024-04-27 05:01:13.119881] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:18:43.254 [2024-04-27 05:01:13.120019] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:18:43.254 [2024-04-27 05:01:13.120254] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.254 pt3 00:18:43.254 05:01:13 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:43.254 05:01:13 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:43.255 05:01:13 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:43.255 05:01:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:43.255 05:01:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:43.255 05:01:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:43.255 05:01:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:43.255 05:01:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:43.255 05:01:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:43.255 05:01:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:43.255 05:01:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:43.255 05:01:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:43.255 05:01:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.255 05:01:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.512 05:01:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:43.512 "name": "raid_bdev1", 00:18:43.512 "uuid": "f7a6a33d-239a-4f7d-b4e9-38df9370697e", 00:18:43.512 "strip_size_kb": 0, 00:18:43.512 "state": "online", 00:18:43.512 "raid_level": "raid1", 00:18:43.512 "superblock": true, 00:18:43.512 "num_base_bdevs": 3, 00:18:43.512 "num_base_bdevs_discovered": 3, 00:18:43.512 "num_base_bdevs_operational": 3, 00:18:43.512 "base_bdevs_list": [ 00:18:43.512 { 00:18:43.512 "name": "pt1", 00:18:43.512 "uuid": "b4d043bf-acda-5dcf-9f21-290968db5f81", 00:18:43.512 "is_configured": true, 00:18:43.512 "data_offset": 2048, 00:18:43.512 "data_size": 63488 00:18:43.512 }, 00:18:43.512 { 00:18:43.512 "name": "pt2", 00:18:43.512 "uuid": "d58f4c64-3bc5-54df-8c56-50c892e8127f", 00:18:43.512 "is_configured": true, 00:18:43.512 "data_offset": 2048, 00:18:43.512 "data_size": 63488 00:18:43.512 }, 00:18:43.512 { 00:18:43.512 "name": "pt3", 00:18:43.512 "uuid": "a9ca9f65-05ae-5beb-90db-7dc5a5650eab", 00:18:43.512 "is_configured": true, 00:18:43.512 "data_offset": 2048, 00:18:43.512 "data_size": 63488 00:18:43.512 } 00:18:43.512 ] 00:18:43.512 }' 00:18:43.512 05:01:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:43.512 05:01:13 -- common/autotest_common.sh@10 -- # set +x 00:18:44.456 05:01:14 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:44.456 05:01:14 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:44.456 [2024-04-27 05:01:14.313250] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.456 05:01:14 -- bdev/bdev_raid.sh@430 -- # '[' f7a6a33d-239a-4f7d-b4e9-38df9370697e '!=' f7a6a33d-239a-4f7d-b4e9-38df9370697e ']' 00:18:44.456 05:01:14 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:18:44.456 05:01:14 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:44.456 05:01:14 -- bdev/bdev_raid.sh@196 -- # return 0 
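(Annotation, not part of the captured console output: the step that follows exercises raid1 redundancy on the superblock-backed array by deleting one passthru member and re-checking the raid bdev. A minimal manual sketch of the same sequence, assuming only the socket path, bdev names and jq expression that appear in this trace, would be:
    # drop one raid1 member, then confirm the array is still online with 2 of 3 members
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
Per the JSON captured below, the expected result here is "online 2/2": raid1 tolerates the missing member, the freed slot shows up as a null entry with the all-zero UUID, and num_base_bdevs_operational drops to 2.)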
00:18:44.456 05:01:14 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:44.719 [2024-04-27 05:01:14.609134] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:44.977 05:01:14 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:44.977 05:01:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:44.977 05:01:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:44.977 05:01:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:44.977 05:01:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:44.977 05:01:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:44.977 05:01:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:44.977 05:01:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:44.977 05:01:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:44.977 05:01:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:44.977 05:01:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.977 05:01:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.977 05:01:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:44.977 "name": "raid_bdev1", 00:18:44.977 "uuid": "f7a6a33d-239a-4f7d-b4e9-38df9370697e", 00:18:44.977 "strip_size_kb": 0, 00:18:44.977 "state": "online", 00:18:44.977 "raid_level": "raid1", 00:18:44.977 "superblock": true, 00:18:44.977 "num_base_bdevs": 3, 00:18:44.977 "num_base_bdevs_discovered": 2, 00:18:44.977 "num_base_bdevs_operational": 2, 00:18:44.977 "base_bdevs_list": [ 00:18:44.977 { 00:18:44.977 "name": null, 00:18:44.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.977 "is_configured": false, 00:18:44.977 "data_offset": 2048, 00:18:44.977 "data_size": 63488 00:18:44.977 }, 00:18:44.977 { 00:18:44.977 "name": "pt2", 00:18:44.977 "uuid": "d58f4c64-3bc5-54df-8c56-50c892e8127f", 00:18:44.977 "is_configured": true, 00:18:44.977 "data_offset": 2048, 00:18:44.977 "data_size": 63488 00:18:44.977 }, 00:18:44.977 { 00:18:44.977 "name": "pt3", 00:18:44.977 "uuid": "a9ca9f65-05ae-5beb-90db-7dc5a5650eab", 00:18:44.977 "is_configured": true, 00:18:44.977 "data_offset": 2048, 00:18:44.977 "data_size": 63488 00:18:44.977 } 00:18:44.977 ] 00:18:44.977 }' 00:18:44.977 05:01:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:44.977 05:01:14 -- common/autotest_common.sh@10 -- # set +x 00:18:45.909 05:01:15 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:45.909 [2024-04-27 05:01:15.757308] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:45.909 [2024-04-27 05:01:15.757645] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:45.909 [2024-04-27 05:01:15.757859] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:45.909 [2024-04-27 05:01:15.758075] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:45.909 [2024-04-27 05:01:15.758216] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:18:45.909 05:01:15 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:18:45.909 05:01:15 -- bdev/bdev_raid.sh@443 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.167 05:01:16 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:18:46.167 05:01:16 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:18:46.167 05:01:16 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:18:46.167 05:01:16 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:46.167 05:01:16 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:46.425 05:01:16 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:46.425 05:01:16 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:46.425 05:01:16 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:46.682 05:01:16 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:46.682 05:01:16 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:46.682 05:01:16 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:18:46.682 05:01:16 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:46.682 05:01:16 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:46.940 [2024-04-27 05:01:16.713531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:46.940 [2024-04-27 05:01:16.713863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.940 [2024-04-27 05:01:16.713961] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:46.940 [2024-04-27 05:01:16.714195] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.940 [2024-04-27 05:01:16.717130] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.940 [2024-04-27 05:01:16.717315] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:46.940 [2024-04-27 05:01:16.717562] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:46.940 [2024-04-27 05:01:16.717708] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:46.940 pt2 00:18:46.940 05:01:16 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:46.940 05:01:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:46.940 05:01:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:46.940 05:01:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:46.940 05:01:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:46.940 05:01:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:46.940 05:01:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:46.940 05:01:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:46.940 05:01:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:46.940 05:01:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:46.940 05:01:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.940 05:01:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.199 05:01:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:47.199 "name": "raid_bdev1", 00:18:47.199 "uuid": "f7a6a33d-239a-4f7d-b4e9-38df9370697e", 00:18:47.199 "strip_size_kb": 0, 00:18:47.199 "state": 
"configuring", 00:18:47.199 "raid_level": "raid1", 00:18:47.199 "superblock": true, 00:18:47.199 "num_base_bdevs": 3, 00:18:47.199 "num_base_bdevs_discovered": 1, 00:18:47.199 "num_base_bdevs_operational": 2, 00:18:47.199 "base_bdevs_list": [ 00:18:47.199 { 00:18:47.199 "name": null, 00:18:47.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.199 "is_configured": false, 00:18:47.199 "data_offset": 2048, 00:18:47.199 "data_size": 63488 00:18:47.199 }, 00:18:47.199 { 00:18:47.199 "name": "pt2", 00:18:47.199 "uuid": "d58f4c64-3bc5-54df-8c56-50c892e8127f", 00:18:47.199 "is_configured": true, 00:18:47.199 "data_offset": 2048, 00:18:47.199 "data_size": 63488 00:18:47.199 }, 00:18:47.199 { 00:18:47.199 "name": null, 00:18:47.199 "uuid": "a9ca9f65-05ae-5beb-90db-7dc5a5650eab", 00:18:47.199 "is_configured": false, 00:18:47.199 "data_offset": 2048, 00:18:47.199 "data_size": 63488 00:18:47.199 } 00:18:47.199 ] 00:18:47.199 }' 00:18:47.199 05:01:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:47.199 05:01:17 -- common/autotest_common.sh@10 -- # set +x 00:18:47.765 05:01:17 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:18:47.765 05:01:17 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:47.765 05:01:17 -- bdev/bdev_raid.sh@462 -- # i=2 00:18:47.765 05:01:17 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:48.023 [2024-04-27 05:01:17.865990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:48.023 [2024-04-27 05:01:17.866381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.023 [2024-04-27 05:01:17.866486] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:48.023 [2024-04-27 05:01:17.866733] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.023 [2024-04-27 05:01:17.867355] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.023 [2024-04-27 05:01:17.867529] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:48.023 [2024-04-27 05:01:17.867786] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:48.023 [2024-04-27 05:01:17.867938] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:48.023 [2024-04-27 05:01:17.868128] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:18:48.023 [2024-04-27 05:01:17.868255] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:48.023 [2024-04-27 05:01:17.868390] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:48.023 [2024-04-27 05:01:17.868921] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:18:48.023 [2024-04-27 05:01:17.869057] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:18:48.023 [2024-04-27 05:01:17.869286] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.023 pt3 00:18:48.023 05:01:17 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:48.023 05:01:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:48.023 05:01:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:48.023 05:01:17 -- bdev/bdev_raid.sh@119 
-- # local raid_level=raid1 00:18:48.023 05:01:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:48.023 05:01:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:48.023 05:01:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:48.023 05:01:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:48.023 05:01:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:48.023 05:01:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:48.023 05:01:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.023 05:01:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.280 05:01:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:48.280 "name": "raid_bdev1", 00:18:48.280 "uuid": "f7a6a33d-239a-4f7d-b4e9-38df9370697e", 00:18:48.280 "strip_size_kb": 0, 00:18:48.280 "state": "online", 00:18:48.280 "raid_level": "raid1", 00:18:48.280 "superblock": true, 00:18:48.280 "num_base_bdevs": 3, 00:18:48.280 "num_base_bdevs_discovered": 2, 00:18:48.280 "num_base_bdevs_operational": 2, 00:18:48.280 "base_bdevs_list": [ 00:18:48.280 { 00:18:48.280 "name": null, 00:18:48.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.280 "is_configured": false, 00:18:48.280 "data_offset": 2048, 00:18:48.280 "data_size": 63488 00:18:48.280 }, 00:18:48.280 { 00:18:48.280 "name": "pt2", 00:18:48.280 "uuid": "d58f4c64-3bc5-54df-8c56-50c892e8127f", 00:18:48.280 "is_configured": true, 00:18:48.280 "data_offset": 2048, 00:18:48.280 "data_size": 63488 00:18:48.280 }, 00:18:48.280 { 00:18:48.280 "name": "pt3", 00:18:48.280 "uuid": "a9ca9f65-05ae-5beb-90db-7dc5a5650eab", 00:18:48.280 "is_configured": true, 00:18:48.280 "data_offset": 2048, 00:18:48.280 "data_size": 63488 00:18:48.280 } 00:18:48.280 ] 00:18:48.280 }' 00:18:48.280 05:01:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:48.280 05:01:18 -- common/autotest_common.sh@10 -- # set +x 00:18:48.898 05:01:18 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:18:48.898 05:01:18 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:49.157 [2024-04-27 05:01:19.002249] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:49.157 [2024-04-27 05:01:19.002482] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:49.157 [2024-04-27 05:01:19.002701] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:49.157 [2024-04-27 05:01:19.002902] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:49.157 [2024-04-27 05:01:19.003028] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:18:49.157 05:01:19 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.157 05:01:19 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:18:49.416 05:01:19 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:18:49.416 05:01:19 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:18:49.416 05:01:19 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:49.673 [2024-04-27 05:01:19.526415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc1 00:18:49.673 [2024-04-27 05:01:19.526798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.673 [2024-04-27 05:01:19.526900] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:49.673 [2024-04-27 05:01:19.527155] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.673 [2024-04-27 05:01:19.530138] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.673 [2024-04-27 05:01:19.530328] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:49.673 [2024-04-27 05:01:19.530579] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:49.673 [2024-04-27 05:01:19.530740] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:49.673 pt1 00:18:49.673 05:01:19 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:49.673 05:01:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:49.673 05:01:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:49.673 05:01:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:49.673 05:01:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:49.673 05:01:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:49.673 05:01:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:49.673 05:01:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:49.673 05:01:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:49.673 05:01:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:49.673 05:01:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.674 05:01:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.932 05:01:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:49.932 "name": "raid_bdev1", 00:18:49.932 "uuid": "f7a6a33d-239a-4f7d-b4e9-38df9370697e", 00:18:49.932 "strip_size_kb": 0, 00:18:49.932 "state": "configuring", 00:18:49.932 "raid_level": "raid1", 00:18:49.932 "superblock": true, 00:18:49.932 "num_base_bdevs": 3, 00:18:49.932 "num_base_bdevs_discovered": 1, 00:18:49.932 "num_base_bdevs_operational": 3, 00:18:49.932 "base_bdevs_list": [ 00:18:49.932 { 00:18:49.932 "name": "pt1", 00:18:49.932 "uuid": "b4d043bf-acda-5dcf-9f21-290968db5f81", 00:18:49.932 "is_configured": true, 00:18:49.932 "data_offset": 2048, 00:18:49.932 "data_size": 63488 00:18:49.932 }, 00:18:49.932 { 00:18:49.932 "name": null, 00:18:49.932 "uuid": "d58f4c64-3bc5-54df-8c56-50c892e8127f", 00:18:49.932 "is_configured": false, 00:18:49.932 "data_offset": 2048, 00:18:49.932 "data_size": 63488 00:18:49.932 }, 00:18:49.932 { 00:18:49.932 "name": null, 00:18:49.932 "uuid": "a9ca9f65-05ae-5beb-90db-7dc5a5650eab", 00:18:49.932 "is_configured": false, 00:18:49.932 "data_offset": 2048, 00:18:49.932 "data_size": 63488 00:18:49.932 } 00:18:49.932 ] 00:18:49.932 }' 00:18:49.932 05:01:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:49.932 05:01:19 -- common/autotest_common.sh@10 -- # set +x 00:18:50.864 05:01:20 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:18:50.864 05:01:20 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:50.864 05:01:20 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:50.864 05:01:20 -- 
bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:50.864 05:01:20 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:50.864 05:01:20 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:51.430 05:01:21 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:51.430 05:01:21 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:51.430 05:01:21 -- bdev/bdev_raid.sh@489 -- # i=2 00:18:51.430 05:01:21 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:51.430 [2024-04-27 05:01:21.255180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:51.430 [2024-04-27 05:01:21.255515] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.430 [2024-04-27 05:01:21.255612] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:51.430 [2024-04-27 05:01:21.255881] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.430 [2024-04-27 05:01:21.256605] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.430 [2024-04-27 05:01:21.256787] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:51.430 [2024-04-27 05:01:21.257054] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:51.430 [2024-04-27 05:01:21.257179] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:51.430 [2024-04-27 05:01:21.257286] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:51.430 [2024-04-27 05:01:21.257351] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state configuring 00:18:51.430 [2024-04-27 05:01:21.257519] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:51.430 pt3 00:18:51.430 05:01:21 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:51.430 05:01:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:51.430 05:01:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:51.430 05:01:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:51.430 05:01:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:51.430 05:01:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:51.430 05:01:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:51.430 05:01:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:51.430 05:01:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:51.430 05:01:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:51.430 05:01:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.430 05:01:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.688 05:01:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:51.688 "name": "raid_bdev1", 00:18:51.688 "uuid": "f7a6a33d-239a-4f7d-b4e9-38df9370697e", 00:18:51.688 "strip_size_kb": 0, 00:18:51.688 "state": "configuring", 00:18:51.688 "raid_level": "raid1", 00:18:51.688 "superblock": true, 00:18:51.688 "num_base_bdevs": 3, 00:18:51.688 "num_base_bdevs_discovered": 1, 00:18:51.688 
"num_base_bdevs_operational": 2, 00:18:51.688 "base_bdevs_list": [ 00:18:51.688 { 00:18:51.688 "name": null, 00:18:51.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.688 "is_configured": false, 00:18:51.688 "data_offset": 2048, 00:18:51.688 "data_size": 63488 00:18:51.688 }, 00:18:51.688 { 00:18:51.688 "name": null, 00:18:51.688 "uuid": "d58f4c64-3bc5-54df-8c56-50c892e8127f", 00:18:51.689 "is_configured": false, 00:18:51.689 "data_offset": 2048, 00:18:51.689 "data_size": 63488 00:18:51.689 }, 00:18:51.689 { 00:18:51.689 "name": "pt3", 00:18:51.689 "uuid": "a9ca9f65-05ae-5beb-90db-7dc5a5650eab", 00:18:51.689 "is_configured": true, 00:18:51.689 "data_offset": 2048, 00:18:51.689 "data_size": 63488 00:18:51.689 } 00:18:51.689 ] 00:18:51.689 }' 00:18:51.689 05:01:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:51.689 05:01:21 -- common/autotest_common.sh@10 -- # set +x 00:18:52.623 05:01:22 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:18:52.623 05:01:22 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:52.623 05:01:22 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:52.623 [2024-04-27 05:01:22.411490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:52.623 [2024-04-27 05:01:22.411813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.623 [2024-04-27 05:01:22.411907] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:52.623 [2024-04-27 05:01:22.412157] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.623 [2024-04-27 05:01:22.412805] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.623 [2024-04-27 05:01:22.412974] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:52.623 [2024-04-27 05:01:22.413197] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:52.623 [2024-04-27 05:01:22.413332] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:52.623 [2024-04-27 05:01:22.413531] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:18:52.623 [2024-04-27 05:01:22.413648] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:52.623 [2024-04-27 05:01:22.413802] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:18:52.623 [2024-04-27 05:01:22.414300] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:18:52.623 [2024-04-27 05:01:22.414442] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:18:52.623 [2024-04-27 05:01:22.414673] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.623 pt2 00:18:52.623 05:01:22 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:18:52.623 05:01:22 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:52.623 05:01:22 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:52.623 05:01:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:52.623 05:01:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:52.623 05:01:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:52.623 05:01:22 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=0 00:18:52.623 05:01:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:52.623 05:01:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:52.623 05:01:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:52.623 05:01:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:52.623 05:01:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:52.623 05:01:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.623 05:01:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.881 05:01:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:52.881 "name": "raid_bdev1", 00:18:52.881 "uuid": "f7a6a33d-239a-4f7d-b4e9-38df9370697e", 00:18:52.881 "strip_size_kb": 0, 00:18:52.881 "state": "online", 00:18:52.881 "raid_level": "raid1", 00:18:52.881 "superblock": true, 00:18:52.881 "num_base_bdevs": 3, 00:18:52.881 "num_base_bdevs_discovered": 2, 00:18:52.881 "num_base_bdevs_operational": 2, 00:18:52.881 "base_bdevs_list": [ 00:18:52.881 { 00:18:52.881 "name": null, 00:18:52.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.881 "is_configured": false, 00:18:52.881 "data_offset": 2048, 00:18:52.881 "data_size": 63488 00:18:52.881 }, 00:18:52.881 { 00:18:52.881 "name": "pt2", 00:18:52.881 "uuid": "d58f4c64-3bc5-54df-8c56-50c892e8127f", 00:18:52.881 "is_configured": true, 00:18:52.881 "data_offset": 2048, 00:18:52.881 "data_size": 63488 00:18:52.881 }, 00:18:52.881 { 00:18:52.881 "name": "pt3", 00:18:52.881 "uuid": "a9ca9f65-05ae-5beb-90db-7dc5a5650eab", 00:18:52.881 "is_configured": true, 00:18:52.881 "data_offset": 2048, 00:18:52.881 "data_size": 63488 00:18:52.881 } 00:18:52.881 ] 00:18:52.881 }' 00:18:52.881 05:01:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:52.881 05:01:22 -- common/autotest_common.sh@10 -- # set +x 00:18:53.446 05:01:23 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:18:53.446 05:01:23 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:53.704 [2024-04-27 05:01:23.532014] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:53.704 05:01:23 -- bdev/bdev_raid.sh@506 -- # '[' f7a6a33d-239a-4f7d-b4e9-38df9370697e '!=' f7a6a33d-239a-4f7d-b4e9-38df9370697e ']' 00:18:53.704 05:01:23 -- bdev/bdev_raid.sh@511 -- # killprocess 130079 00:18:53.704 05:01:23 -- common/autotest_common.sh@926 -- # '[' -z 130079 ']' 00:18:53.704 05:01:23 -- common/autotest_common.sh@930 -- # kill -0 130079 00:18:53.704 05:01:23 -- common/autotest_common.sh@931 -- # uname 00:18:53.704 05:01:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:53.704 05:01:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130079 00:18:53.704 killing process with pid 130079 00:18:53.704 05:01:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:53.704 05:01:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:53.704 05:01:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130079' 00:18:53.704 05:01:23 -- common/autotest_common.sh@945 -- # kill 130079 00:18:53.704 [2024-04-27 05:01:23.577828] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:53.704 05:01:23 -- common/autotest_common.sh@950 -- # wait 130079 00:18:53.704 [2024-04-27 05:01:23.577964] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
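The closing step of raid_superblock_test traced above (bdev_raid.sh@506) re-reads raid_bdev1 after the base bdevs were re-added and checks that its uuid is unchanged before tearing the target down. A minimal standalone sketch of that check, using only the rpc.py call and jq filter visible in the trace; the $expected_uuid variable is illustrative and stands in for the uuid captured when raid_bdev1 was first created, not a name from bdev_raid.sh itself:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # uuid reported for raid_bdev1 after pt2 was re-added and the bdev came back online
    got_uuid=$("$rpc" -s "$sock" bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    # the test fails if the raid bdev identity changed across the rebuild
    if [ "$got_uuid" != "$expected_uuid" ]; then
        exit 1
    fi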
00:18:53.704 [2024-04-27 05:01:23.578046] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:53.704 [2024-04-27 05:01:23.578059] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:18:53.961 [2024-04-27 05:01:23.645498] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:54.218 00:18:54.218 real 0m19.921s 00:18:54.218 user 0m37.193s 00:18:54.218 ************************************ 00:18:54.218 END TEST raid_superblock_test 00:18:54.218 ************************************ 00:18:54.218 sys 0m2.441s 00:18:54.218 05:01:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:54.218 05:01:24 -- common/autotest_common.sh@10 -- # set +x 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:18:54.218 05:01:24 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:54.218 05:01:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:54.218 05:01:24 -- common/autotest_common.sh@10 -- # set +x 00:18:54.218 ************************************ 00:18:54.218 START TEST raid_state_function_test 00:18:54.218 ************************************ 00:18:54.218 05:01:24 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:54.218 05:01:24 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:54.219 05:01:24 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:54.219 05:01:24 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:54.219 05:01:24 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:54.219 05:01:24 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:54.219 05:01:24 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:54.219 05:01:24 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:54.219 05:01:24 -- bdev/bdev_raid.sh@214 -- # 
strip_size_create_arg='-z 64' 00:18:54.219 05:01:24 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:54.219 05:01:24 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:54.219 05:01:24 -- bdev/bdev_raid.sh@226 -- # raid_pid=130688 00:18:54.219 05:01:24 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130688' 00:18:54.219 05:01:24 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:54.219 Process raid pid: 130688 00:18:54.219 05:01:24 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130688 /var/tmp/spdk-raid.sock 00:18:54.219 05:01:24 -- common/autotest_common.sh@819 -- # '[' -z 130688 ']' 00:18:54.219 05:01:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:54.219 05:01:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:54.219 05:01:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:54.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:54.219 05:01:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:54.219 05:01:24 -- common/autotest_common.sh@10 -- # set +x 00:18:54.476 [2024-04-27 05:01:24.171211] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:54.476 [2024-04-27 05:01:24.171767] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.476 [2024-04-27 05:01:24.341284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.827 [2024-04-27 05:01:24.464769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.827 [2024-04-27 05:01:24.546071] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:55.391 05:01:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:55.391 05:01:25 -- common/autotest_common.sh@852 -- # return 0 00:18:55.392 05:01:25 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:55.649 [2024-04-27 05:01:25.367896] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:55.649 [2024-04-27 05:01:25.368222] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:55.649 [2024-04-27 05:01:25.368348] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:55.649 [2024-04-27 05:01:25.368501] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:55.649 [2024-04-27 05:01:25.368632] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:55.649 [2024-04-27 05:01:25.368726] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:55.649 [2024-04-27 05:01:25.368765] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:55.649 [2024-04-27 05:01:25.368829] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:55.649 05:01:25 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:55.649 05:01:25 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:55.649 05:01:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:55.650 05:01:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:55.650 05:01:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:55.650 05:01:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:55.650 05:01:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:55.650 05:01:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:55.650 05:01:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:55.650 05:01:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:55.650 05:01:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.650 05:01:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.907 05:01:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:55.907 "name": "Existed_Raid", 00:18:55.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.907 "strip_size_kb": 64, 00:18:55.907 "state": "configuring", 00:18:55.907 "raid_level": "raid0", 00:18:55.907 "superblock": false, 00:18:55.907 "num_base_bdevs": 4, 00:18:55.907 "num_base_bdevs_discovered": 0, 00:18:55.907 "num_base_bdevs_operational": 4, 00:18:55.907 "base_bdevs_list": [ 00:18:55.907 { 00:18:55.907 "name": "BaseBdev1", 00:18:55.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.907 "is_configured": false, 00:18:55.907 "data_offset": 0, 00:18:55.907 "data_size": 0 00:18:55.907 }, 00:18:55.907 { 00:18:55.907 "name": "BaseBdev2", 00:18:55.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.907 "is_configured": false, 00:18:55.907 "data_offset": 0, 00:18:55.907 "data_size": 0 00:18:55.907 }, 00:18:55.907 { 00:18:55.907 "name": "BaseBdev3", 00:18:55.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.907 "is_configured": false, 00:18:55.907 "data_offset": 0, 00:18:55.907 "data_size": 0 00:18:55.907 }, 00:18:55.907 { 00:18:55.907 "name": "BaseBdev4", 00:18:55.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.907 "is_configured": false, 00:18:55.907 "data_offset": 0, 00:18:55.907 "data_size": 0 00:18:55.907 } 00:18:55.907 ] 00:18:55.907 }' 00:18:55.907 05:01:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:55.907 05:01:25 -- common/autotest_common.sh@10 -- # set +x 00:18:56.472 05:01:26 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:56.731 [2024-04-27 05:01:26.531965] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:56.731 [2024-04-27 05:01:26.532301] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:56.731 05:01:26 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:56.990 [2024-04-27 05:01:26.776066] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:56.990 [2024-04-27 05:01:26.776464] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:56.990 [2024-04-27 05:01:26.776611] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:56.990 [2024-04-27 05:01:26.776757] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:56.990 [2024-04-27 05:01:26.776863] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:56.990 [2024-04-27 05:01:26.776952] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:56.990 [2024-04-27 05:01:26.776988] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:56.990 [2024-04-27 05:01:26.777041] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:56.990 05:01:26 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:57.248 [2024-04-27 05:01:27.033023] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:57.248 BaseBdev1 00:18:57.248 05:01:27 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:57.248 05:01:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:57.248 05:01:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:57.248 05:01:27 -- common/autotest_common.sh@889 -- # local i 00:18:57.248 05:01:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:57.248 05:01:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:57.248 05:01:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:57.505 05:01:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:57.763 [ 00:18:57.763 { 00:18:57.763 "name": "BaseBdev1", 00:18:57.763 "aliases": [ 00:18:57.763 "5d550cd2-cc23-4322-ae08-344f14ac48d1" 00:18:57.763 ], 00:18:57.763 "product_name": "Malloc disk", 00:18:57.763 "block_size": 512, 00:18:57.763 "num_blocks": 65536, 00:18:57.763 "uuid": "5d550cd2-cc23-4322-ae08-344f14ac48d1", 00:18:57.763 "assigned_rate_limits": { 00:18:57.763 "rw_ios_per_sec": 0, 00:18:57.763 "rw_mbytes_per_sec": 0, 00:18:57.763 "r_mbytes_per_sec": 0, 00:18:57.763 "w_mbytes_per_sec": 0 00:18:57.763 }, 00:18:57.763 "claimed": true, 00:18:57.763 "claim_type": "exclusive_write", 00:18:57.763 "zoned": false, 00:18:57.763 "supported_io_types": { 00:18:57.763 "read": true, 00:18:57.763 "write": true, 00:18:57.763 "unmap": true, 00:18:57.763 "write_zeroes": true, 00:18:57.763 "flush": true, 00:18:57.763 "reset": true, 00:18:57.763 "compare": false, 00:18:57.763 "compare_and_write": false, 00:18:57.763 "abort": true, 00:18:57.763 "nvme_admin": false, 00:18:57.763 "nvme_io": false 00:18:57.763 }, 00:18:57.763 "memory_domains": [ 00:18:57.763 { 00:18:57.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.763 "dma_device_type": 2 00:18:57.763 } 00:18:57.763 ], 00:18:57.763 "driver_specific": {} 00:18:57.763 } 00:18:57.763 ] 00:18:57.763 05:01:27 -- common/autotest_common.sh@895 -- # return 0 00:18:57.763 05:01:27 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:57.763 05:01:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:57.763 05:01:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:57.763 05:01:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:57.764 05:01:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:57.764 05:01:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:57.764 05:01:27 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:57.764 05:01:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:57.764 05:01:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:57.764 05:01:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:57.764 05:01:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.764 05:01:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.022 05:01:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:58.022 "name": "Existed_Raid", 00:18:58.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.022 "strip_size_kb": 64, 00:18:58.022 "state": "configuring", 00:18:58.022 "raid_level": "raid0", 00:18:58.022 "superblock": false, 00:18:58.022 "num_base_bdevs": 4, 00:18:58.022 "num_base_bdevs_discovered": 1, 00:18:58.022 "num_base_bdevs_operational": 4, 00:18:58.022 "base_bdevs_list": [ 00:18:58.022 { 00:18:58.022 "name": "BaseBdev1", 00:18:58.022 "uuid": "5d550cd2-cc23-4322-ae08-344f14ac48d1", 00:18:58.022 "is_configured": true, 00:18:58.022 "data_offset": 0, 00:18:58.022 "data_size": 65536 00:18:58.022 }, 00:18:58.022 { 00:18:58.022 "name": "BaseBdev2", 00:18:58.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.022 "is_configured": false, 00:18:58.022 "data_offset": 0, 00:18:58.022 "data_size": 0 00:18:58.022 }, 00:18:58.022 { 00:18:58.022 "name": "BaseBdev3", 00:18:58.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.022 "is_configured": false, 00:18:58.022 "data_offset": 0, 00:18:58.022 "data_size": 0 00:18:58.022 }, 00:18:58.022 { 00:18:58.022 "name": "BaseBdev4", 00:18:58.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.022 "is_configured": false, 00:18:58.022 "data_offset": 0, 00:18:58.022 "data_size": 0 00:18:58.022 } 00:18:58.022 ] 00:18:58.022 }' 00:18:58.022 05:01:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:58.022 05:01:27 -- common/autotest_common.sh@10 -- # set +x 00:18:58.589 05:01:28 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:58.863 [2024-04-27 05:01:28.633523] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:58.863 [2024-04-27 05:01:28.633911] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:58.863 05:01:28 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:58.863 05:01:28 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:59.126 [2024-04-27 05:01:28.861661] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:59.126 [2024-04-27 05:01:28.864432] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:59.126 [2024-04-27 05:01:28.864669] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:59.126 [2024-04-27 05:01:28.864789] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:59.126 [2024-04-27 05:01:28.864936] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:59.126 [2024-04-27 05:01:28.865042] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:59.126 [2024-04-27 
05:01:28.865105] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:59.126 05:01:28 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:59.126 05:01:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:59.126 05:01:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:59.126 05:01:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:59.126 05:01:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:59.126 05:01:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:59.126 05:01:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:59.126 05:01:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:59.126 05:01:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:59.126 05:01:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:59.126 05:01:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:59.126 05:01:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:59.126 05:01:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.126 05:01:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.384 05:01:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:59.384 "name": "Existed_Raid", 00:18:59.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.384 "strip_size_kb": 64, 00:18:59.384 "state": "configuring", 00:18:59.384 "raid_level": "raid0", 00:18:59.384 "superblock": false, 00:18:59.384 "num_base_bdevs": 4, 00:18:59.384 "num_base_bdevs_discovered": 1, 00:18:59.384 "num_base_bdevs_operational": 4, 00:18:59.384 "base_bdevs_list": [ 00:18:59.384 { 00:18:59.384 "name": "BaseBdev1", 00:18:59.384 "uuid": "5d550cd2-cc23-4322-ae08-344f14ac48d1", 00:18:59.384 "is_configured": true, 00:18:59.384 "data_offset": 0, 00:18:59.384 "data_size": 65536 00:18:59.384 }, 00:18:59.384 { 00:18:59.384 "name": "BaseBdev2", 00:18:59.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.384 "is_configured": false, 00:18:59.384 "data_offset": 0, 00:18:59.384 "data_size": 0 00:18:59.384 }, 00:18:59.384 { 00:18:59.384 "name": "BaseBdev3", 00:18:59.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.384 "is_configured": false, 00:18:59.384 "data_offset": 0, 00:18:59.384 "data_size": 0 00:18:59.384 }, 00:18:59.384 { 00:18:59.384 "name": "BaseBdev4", 00:18:59.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.384 "is_configured": false, 00:18:59.384 "data_offset": 0, 00:18:59.384 "data_size": 0 00:18:59.384 } 00:18:59.384 ] 00:18:59.384 }' 00:18:59.384 05:01:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:59.384 05:01:29 -- common/autotest_common.sh@10 -- # set +x 00:18:59.948 05:01:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:00.206 [2024-04-27 05:01:30.082832] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:00.206 BaseBdev2 00:19:00.206 05:01:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:00.206 05:01:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:00.206 05:01:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:00.206 05:01:30 -- common/autotest_common.sh@889 -- # local i 00:19:00.206 05:01:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:00.206 05:01:30 -- 
common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:00.552 05:01:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:00.552 05:01:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:00.810 [ 00:19:00.810 { 00:19:00.810 "name": "BaseBdev2", 00:19:00.810 "aliases": [ 00:19:00.810 "9db1439a-efa8-4f86-b633-24e91ea82e7d" 00:19:00.810 ], 00:19:00.810 "product_name": "Malloc disk", 00:19:00.810 "block_size": 512, 00:19:00.810 "num_blocks": 65536, 00:19:00.810 "uuid": "9db1439a-efa8-4f86-b633-24e91ea82e7d", 00:19:00.810 "assigned_rate_limits": { 00:19:00.810 "rw_ios_per_sec": 0, 00:19:00.810 "rw_mbytes_per_sec": 0, 00:19:00.810 "r_mbytes_per_sec": 0, 00:19:00.810 "w_mbytes_per_sec": 0 00:19:00.810 }, 00:19:00.810 "claimed": true, 00:19:00.810 "claim_type": "exclusive_write", 00:19:00.810 "zoned": false, 00:19:00.810 "supported_io_types": { 00:19:00.810 "read": true, 00:19:00.810 "write": true, 00:19:00.810 "unmap": true, 00:19:00.810 "write_zeroes": true, 00:19:00.810 "flush": true, 00:19:00.810 "reset": true, 00:19:00.810 "compare": false, 00:19:00.810 "compare_and_write": false, 00:19:00.810 "abort": true, 00:19:00.810 "nvme_admin": false, 00:19:00.810 "nvme_io": false 00:19:00.810 }, 00:19:00.810 "memory_domains": [ 00:19:00.810 { 00:19:00.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.810 "dma_device_type": 2 00:19:00.810 } 00:19:00.810 ], 00:19:00.810 "driver_specific": {} 00:19:00.810 } 00:19:00.810 ] 00:19:00.810 05:01:30 -- common/autotest_common.sh@895 -- # return 0 00:19:00.810 05:01:30 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:00.810 05:01:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:00.810 05:01:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:00.810 05:01:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:00.810 05:01:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:00.810 05:01:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:00.810 05:01:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:00.810 05:01:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:00.810 05:01:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:00.810 05:01:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:00.810 05:01:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:00.810 05:01:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:00.810 05:01:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.810 05:01:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.068 05:01:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:01.068 "name": "Existed_Raid", 00:19:01.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.068 "strip_size_kb": 64, 00:19:01.068 "state": "configuring", 00:19:01.068 "raid_level": "raid0", 00:19:01.068 "superblock": false, 00:19:01.068 "num_base_bdevs": 4, 00:19:01.068 "num_base_bdevs_discovered": 2, 00:19:01.068 "num_base_bdevs_operational": 4, 00:19:01.068 "base_bdevs_list": [ 00:19:01.068 { 00:19:01.068 "name": "BaseBdev1", 00:19:01.068 "uuid": "5d550cd2-cc23-4322-ae08-344f14ac48d1", 00:19:01.068 "is_configured": true, 00:19:01.068 "data_offset": 0, 00:19:01.068 
"data_size": 65536 00:19:01.068 }, 00:19:01.068 { 00:19:01.068 "name": "BaseBdev2", 00:19:01.068 "uuid": "9db1439a-efa8-4f86-b633-24e91ea82e7d", 00:19:01.068 "is_configured": true, 00:19:01.068 "data_offset": 0, 00:19:01.068 "data_size": 65536 00:19:01.068 }, 00:19:01.068 { 00:19:01.068 "name": "BaseBdev3", 00:19:01.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.068 "is_configured": false, 00:19:01.068 "data_offset": 0, 00:19:01.068 "data_size": 0 00:19:01.068 }, 00:19:01.068 { 00:19:01.068 "name": "BaseBdev4", 00:19:01.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.068 "is_configured": false, 00:19:01.068 "data_offset": 0, 00:19:01.068 "data_size": 0 00:19:01.068 } 00:19:01.068 ] 00:19:01.068 }' 00:19:01.068 05:01:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:01.068 05:01:30 -- common/autotest_common.sh@10 -- # set +x 00:19:01.648 05:01:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:01.906 [2024-04-27 05:01:31.752171] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:01.906 BaseBdev3 00:19:01.906 05:01:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:01.906 05:01:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:01.906 05:01:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:01.906 05:01:31 -- common/autotest_common.sh@889 -- # local i 00:19:01.906 05:01:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:01.906 05:01:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:01.906 05:01:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:02.162 05:01:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:02.419 [ 00:19:02.419 { 00:19:02.419 "name": "BaseBdev3", 00:19:02.419 "aliases": [ 00:19:02.419 "e2b64590-5256-4b0e-820d-96e272d28201" 00:19:02.419 ], 00:19:02.419 "product_name": "Malloc disk", 00:19:02.419 "block_size": 512, 00:19:02.419 "num_blocks": 65536, 00:19:02.419 "uuid": "e2b64590-5256-4b0e-820d-96e272d28201", 00:19:02.419 "assigned_rate_limits": { 00:19:02.419 "rw_ios_per_sec": 0, 00:19:02.419 "rw_mbytes_per_sec": 0, 00:19:02.419 "r_mbytes_per_sec": 0, 00:19:02.419 "w_mbytes_per_sec": 0 00:19:02.419 }, 00:19:02.419 "claimed": true, 00:19:02.419 "claim_type": "exclusive_write", 00:19:02.419 "zoned": false, 00:19:02.419 "supported_io_types": { 00:19:02.419 "read": true, 00:19:02.419 "write": true, 00:19:02.419 "unmap": true, 00:19:02.419 "write_zeroes": true, 00:19:02.419 "flush": true, 00:19:02.419 "reset": true, 00:19:02.419 "compare": false, 00:19:02.419 "compare_and_write": false, 00:19:02.419 "abort": true, 00:19:02.419 "nvme_admin": false, 00:19:02.419 "nvme_io": false 00:19:02.419 }, 00:19:02.419 "memory_domains": [ 00:19:02.419 { 00:19:02.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.419 "dma_device_type": 2 00:19:02.419 } 00:19:02.419 ], 00:19:02.419 "driver_specific": {} 00:19:02.419 } 00:19:02.419 ] 00:19:02.419 05:01:32 -- common/autotest_common.sh@895 -- # return 0 00:19:02.419 05:01:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:02.419 05:01:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:02.420 05:01:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:02.420 05:01:32 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:02.420 05:01:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:02.420 05:01:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:02.420 05:01:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:02.420 05:01:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:02.420 05:01:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:02.420 05:01:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:02.420 05:01:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:02.420 05:01:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:02.420 05:01:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.420 05:01:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.678 05:01:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:02.678 "name": "Existed_Raid", 00:19:02.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.678 "strip_size_kb": 64, 00:19:02.678 "state": "configuring", 00:19:02.678 "raid_level": "raid0", 00:19:02.678 "superblock": false, 00:19:02.678 "num_base_bdevs": 4, 00:19:02.678 "num_base_bdevs_discovered": 3, 00:19:02.678 "num_base_bdevs_operational": 4, 00:19:02.678 "base_bdevs_list": [ 00:19:02.678 { 00:19:02.678 "name": "BaseBdev1", 00:19:02.678 "uuid": "5d550cd2-cc23-4322-ae08-344f14ac48d1", 00:19:02.678 "is_configured": true, 00:19:02.678 "data_offset": 0, 00:19:02.678 "data_size": 65536 00:19:02.678 }, 00:19:02.678 { 00:19:02.678 "name": "BaseBdev2", 00:19:02.678 "uuid": "9db1439a-efa8-4f86-b633-24e91ea82e7d", 00:19:02.678 "is_configured": true, 00:19:02.678 "data_offset": 0, 00:19:02.678 "data_size": 65536 00:19:02.678 }, 00:19:02.678 { 00:19:02.678 "name": "BaseBdev3", 00:19:02.678 "uuid": "e2b64590-5256-4b0e-820d-96e272d28201", 00:19:02.678 "is_configured": true, 00:19:02.678 "data_offset": 0, 00:19:02.678 "data_size": 65536 00:19:02.678 }, 00:19:02.678 { 00:19:02.678 "name": "BaseBdev4", 00:19:02.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.678 "is_configured": false, 00:19:02.678 "data_offset": 0, 00:19:02.678 "data_size": 0 00:19:02.678 } 00:19:02.678 ] 00:19:02.678 }' 00:19:02.678 05:01:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:02.678 05:01:32 -- common/autotest_common.sh@10 -- # set +x 00:19:03.613 05:01:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:03.613 [2024-04-27 05:01:33.461797] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:03.613 [2024-04-27 05:01:33.462132] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:19:03.613 [2024-04-27 05:01:33.462184] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:03.613 [2024-04-27 05:01:33.462524] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:03.613 [2024-04-27 05:01:33.463101] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:19:03.613 [2024-04-27 05:01:33.463228] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:19:03.613 [2024-04-27 05:01:33.463648] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.613 BaseBdev4 00:19:03.613 
05:01:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:03.613 05:01:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:03.613 05:01:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:03.613 05:01:33 -- common/autotest_common.sh@889 -- # local i 00:19:03.613 05:01:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:03.613 05:01:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:03.613 05:01:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:03.872 05:01:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:04.132 [ 00:19:04.132 { 00:19:04.132 "name": "BaseBdev4", 00:19:04.132 "aliases": [ 00:19:04.132 "065ee852-8a67-4cb7-954c-7cd8d6ec365e" 00:19:04.132 ], 00:19:04.132 "product_name": "Malloc disk", 00:19:04.132 "block_size": 512, 00:19:04.132 "num_blocks": 65536, 00:19:04.132 "uuid": "065ee852-8a67-4cb7-954c-7cd8d6ec365e", 00:19:04.132 "assigned_rate_limits": { 00:19:04.132 "rw_ios_per_sec": 0, 00:19:04.132 "rw_mbytes_per_sec": 0, 00:19:04.132 "r_mbytes_per_sec": 0, 00:19:04.132 "w_mbytes_per_sec": 0 00:19:04.132 }, 00:19:04.132 "claimed": true, 00:19:04.132 "claim_type": "exclusive_write", 00:19:04.132 "zoned": false, 00:19:04.132 "supported_io_types": { 00:19:04.132 "read": true, 00:19:04.132 "write": true, 00:19:04.132 "unmap": true, 00:19:04.132 "write_zeroes": true, 00:19:04.132 "flush": true, 00:19:04.132 "reset": true, 00:19:04.132 "compare": false, 00:19:04.132 "compare_and_write": false, 00:19:04.132 "abort": true, 00:19:04.132 "nvme_admin": false, 00:19:04.132 "nvme_io": false 00:19:04.132 }, 00:19:04.132 "memory_domains": [ 00:19:04.132 { 00:19:04.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.132 "dma_device_type": 2 00:19:04.132 } 00:19:04.132 ], 00:19:04.132 "driver_specific": {} 00:19:04.132 } 00:19:04.132 ] 00:19:04.132 05:01:34 -- common/autotest_common.sh@895 -- # return 0 00:19:04.132 05:01:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:04.132 05:01:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:04.132 05:01:34 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:19:04.132 05:01:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:04.132 05:01:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:04.132 05:01:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:04.132 05:01:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:04.132 05:01:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:04.132 05:01:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:04.132 05:01:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:04.132 05:01:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:04.132 05:01:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:04.132 05:01:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.132 05:01:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.391 05:01:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:04.391 "name": "Existed_Raid", 00:19:04.391 "uuid": "5d058dad-1b29-4f88-b26a-42846357e30e", 00:19:04.391 "strip_size_kb": 64, 00:19:04.391 "state": "online", 00:19:04.391 "raid_level": "raid0", 00:19:04.391 
"superblock": false, 00:19:04.391 "num_base_bdevs": 4, 00:19:04.391 "num_base_bdevs_discovered": 4, 00:19:04.391 "num_base_bdevs_operational": 4, 00:19:04.391 "base_bdevs_list": [ 00:19:04.391 { 00:19:04.391 "name": "BaseBdev1", 00:19:04.391 "uuid": "5d550cd2-cc23-4322-ae08-344f14ac48d1", 00:19:04.391 "is_configured": true, 00:19:04.391 "data_offset": 0, 00:19:04.391 "data_size": 65536 00:19:04.391 }, 00:19:04.391 { 00:19:04.391 "name": "BaseBdev2", 00:19:04.391 "uuid": "9db1439a-efa8-4f86-b633-24e91ea82e7d", 00:19:04.391 "is_configured": true, 00:19:04.391 "data_offset": 0, 00:19:04.391 "data_size": 65536 00:19:04.391 }, 00:19:04.391 { 00:19:04.391 "name": "BaseBdev3", 00:19:04.391 "uuid": "e2b64590-5256-4b0e-820d-96e272d28201", 00:19:04.391 "is_configured": true, 00:19:04.391 "data_offset": 0, 00:19:04.391 "data_size": 65536 00:19:04.391 }, 00:19:04.391 { 00:19:04.391 "name": "BaseBdev4", 00:19:04.391 "uuid": "065ee852-8a67-4cb7-954c-7cd8d6ec365e", 00:19:04.391 "is_configured": true, 00:19:04.391 "data_offset": 0, 00:19:04.391 "data_size": 65536 00:19:04.391 } 00:19:04.391 ] 00:19:04.391 }' 00:19:04.391 05:01:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:04.391 05:01:34 -- common/autotest_common.sh@10 -- # set +x 00:19:05.326 05:01:34 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:05.327 [2024-04-27 05:01:35.170485] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:05.327 [2024-04-27 05:01:35.170790] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:05.327 [2024-04-27 05:01:35.171015] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:05.588 05:01:35 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:05.588 05:01:35 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:19:05.588 05:01:35 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:05.588 05:01:35 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:05.588 05:01:35 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:05.588 05:01:35 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:19:05.588 05:01:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:05.588 05:01:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:05.588 05:01:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:05.588 05:01:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:05.588 05:01:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:05.588 05:01:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:05.588 05:01:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:05.588 05:01:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:05.588 05:01:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:05.588 05:01:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.588 05:01:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.588 05:01:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:05.588 "name": "Existed_Raid", 00:19:05.588 "uuid": "5d058dad-1b29-4f88-b26a-42846357e30e", 00:19:05.588 "strip_size_kb": 64, 00:19:05.588 "state": "offline", 00:19:05.588 "raid_level": "raid0", 00:19:05.588 "superblock": false, 00:19:05.588 "num_base_bdevs": 4, 00:19:05.588 "num_base_bdevs_discovered": 3, 00:19:05.588 
"num_base_bdevs_operational": 3, 00:19:05.588 "base_bdevs_list": [ 00:19:05.588 { 00:19:05.588 "name": null, 00:19:05.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.588 "is_configured": false, 00:19:05.588 "data_offset": 0, 00:19:05.588 "data_size": 65536 00:19:05.588 }, 00:19:05.588 { 00:19:05.588 "name": "BaseBdev2", 00:19:05.588 "uuid": "9db1439a-efa8-4f86-b633-24e91ea82e7d", 00:19:05.588 "is_configured": true, 00:19:05.588 "data_offset": 0, 00:19:05.588 "data_size": 65536 00:19:05.588 }, 00:19:05.588 { 00:19:05.589 "name": "BaseBdev3", 00:19:05.589 "uuid": "e2b64590-5256-4b0e-820d-96e272d28201", 00:19:05.589 "is_configured": true, 00:19:05.589 "data_offset": 0, 00:19:05.589 "data_size": 65536 00:19:05.589 }, 00:19:05.589 { 00:19:05.589 "name": "BaseBdev4", 00:19:05.589 "uuid": "065ee852-8a67-4cb7-954c-7cd8d6ec365e", 00:19:05.589 "is_configured": true, 00:19:05.589 "data_offset": 0, 00:19:05.589 "data_size": 65536 00:19:05.589 } 00:19:05.589 ] 00:19:05.589 }' 00:19:05.589 05:01:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:05.589 05:01:35 -- common/autotest_common.sh@10 -- # set +x 00:19:06.521 05:01:36 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:06.521 05:01:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:06.521 05:01:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.521 05:01:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:06.778 05:01:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:06.778 05:01:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:06.778 05:01:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:07.036 [2024-04-27 05:01:36.701881] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:07.036 05:01:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:07.036 05:01:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:07.036 05:01:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.036 05:01:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:07.294 05:01:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:07.294 05:01:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:07.294 05:01:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:07.553 [2024-04-27 05:01:37.230972] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:07.553 05:01:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:07.553 05:01:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:07.553 05:01:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.553 05:01:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:07.811 05:01:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:07.811 05:01:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:07.811 05:01:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:08.069 [2024-04-27 05:01:37.809011] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:08.069 [2024-04-27 05:01:37.809408] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:19:08.069 05:01:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:08.069 05:01:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:08.069 05:01:37 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:08.069 05:01:37 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.327 05:01:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:08.327 05:01:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:08.327 05:01:38 -- bdev/bdev_raid.sh@287 -- # killprocess 130688 00:19:08.327 05:01:38 -- common/autotest_common.sh@926 -- # '[' -z 130688 ']' 00:19:08.327 05:01:38 -- common/autotest_common.sh@930 -- # kill -0 130688 00:19:08.327 05:01:38 -- common/autotest_common.sh@931 -- # uname 00:19:08.327 05:01:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:08.327 05:01:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130688 00:19:08.327 05:01:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:08.327 05:01:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:08.327 05:01:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130688' 00:19:08.327 killing process with pid 130688 00:19:08.327 05:01:38 -- common/autotest_common.sh@945 -- # kill 130688 00:19:08.327 [2024-04-27 05:01:38.171845] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:08.327 05:01:38 -- common/autotest_common.sh@950 -- # wait 130688 00:19:08.327 [2024-04-27 05:01:38.172111] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:08.893 05:01:38 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:08.893 00:19:08.893 real 0m14.646s 00:19:08.893 user 0m26.596s 00:19:08.893 sys 0m1.972s 00:19:08.893 ************************************ 00:19:08.893 END TEST raid_state_function_test 00:19:08.893 ************************************ 00:19:08.893 05:01:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:08.893 05:01:38 -- common/autotest_common.sh@10 -- # set +x 00:19:08.893 05:01:38 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:19:08.893 05:01:38 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:08.893 05:01:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:08.893 05:01:38 -- common/autotest_common.sh@10 -- # set +x 00:19:09.151 ************************************ 00:19:09.151 START TEST raid_state_function_test_sb 00:19:09.151 ************************************ 00:19:09.151 05:01:38 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:09.151 
05:01:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@226 -- # raid_pid=131141 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 131141' 00:19:09.151 Process raid pid: 131141 00:19:09.151 05:01:38 -- bdev/bdev_raid.sh@228 -- # waitforlisten 131141 /var/tmp/spdk-raid.sock 00:19:09.151 05:01:38 -- common/autotest_common.sh@819 -- # '[' -z 131141 ']' 00:19:09.151 05:01:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:09.151 05:01:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:09.151 05:01:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:09.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:09.151 05:01:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:09.151 05:01:38 -- common/autotest_common.sh@10 -- # set +x 00:19:09.151 [2024-04-27 05:01:38.877505] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
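For reference, the superblock variant drives the same RPC sequence as the plain state-function test, now passing -s to bdev_raid_create. A minimal sketch (not part of the captured trace) of the calls the harness issues once this bdev_svc instance is listening on /var/tmp/spdk-raid.sock, using only commands that appear in the surrounding log:
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-raid.sock
  # create four 32 MB / 512 B-block malloc base bdevs (65536 blocks each)
  for i in 1 2 3 4; do $RPC -s $SOCK bdev_malloc_create 32 512 -b BaseBdev$i; done
  # assemble them into a raid0 with a 64 KiB strip and an on-disk superblock (-s)
  $RPC -s $SOCK bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # dump the resulting raid bdev state
  $RPC -s $SOCK bdev_raid_get_bdevs all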
00:19:09.151 [2024-04-27 05:01:38.878107] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.410 [2024-04-27 05:01:39.052470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.410 [2024-04-27 05:01:39.216198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.669 [2024-04-27 05:01:39.383932] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:10.234 05:01:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:10.234 05:01:39 -- common/autotest_common.sh@852 -- # return 0 00:19:10.234 05:01:39 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:10.517 [2024-04-27 05:01:40.145636] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:10.517 [2024-04-27 05:01:40.146046] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:10.517 [2024-04-27 05:01:40.146182] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:10.517 [2024-04-27 05:01:40.146255] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:10.517 [2024-04-27 05:01:40.146358] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:10.517 [2024-04-27 05:01:40.147174] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:10.517 [2024-04-27 05:01:40.147321] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:10.517 [2024-04-27 05:01:40.147511] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:10.517 05:01:40 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:10.517 05:01:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:10.517 05:01:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:10.517 05:01:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:10.517 05:01:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:10.517 05:01:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:10.517 05:01:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:10.517 05:01:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:10.517 05:01:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:10.517 05:01:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:10.517 05:01:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.517 05:01:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.785 05:01:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:10.785 "name": "Existed_Raid", 00:19:10.785 "uuid": "28342792-3111-4c71-ae6b-73bbce4e1486", 00:19:10.785 "strip_size_kb": 64, 00:19:10.785 "state": "configuring", 00:19:10.785 "raid_level": "raid0", 00:19:10.785 "superblock": true, 00:19:10.785 "num_base_bdevs": 4, 00:19:10.785 "num_base_bdevs_discovered": 0, 00:19:10.785 "num_base_bdevs_operational": 4, 00:19:10.785 "base_bdevs_list": [ 00:19:10.785 { 00:19:10.785 
"name": "BaseBdev1", 00:19:10.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.785 "is_configured": false, 00:19:10.785 "data_offset": 0, 00:19:10.785 "data_size": 0 00:19:10.785 }, 00:19:10.785 { 00:19:10.785 "name": "BaseBdev2", 00:19:10.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.785 "is_configured": false, 00:19:10.785 "data_offset": 0, 00:19:10.785 "data_size": 0 00:19:10.785 }, 00:19:10.785 { 00:19:10.785 "name": "BaseBdev3", 00:19:10.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.785 "is_configured": false, 00:19:10.785 "data_offset": 0, 00:19:10.785 "data_size": 0 00:19:10.785 }, 00:19:10.785 { 00:19:10.785 "name": "BaseBdev4", 00:19:10.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.785 "is_configured": false, 00:19:10.785 "data_offset": 0, 00:19:10.785 "data_size": 0 00:19:10.785 } 00:19:10.785 ] 00:19:10.785 }' 00:19:10.785 05:01:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:10.785 05:01:40 -- common/autotest_common.sh@10 -- # set +x 00:19:11.351 05:01:41 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:11.608 [2024-04-27 05:01:41.329660] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:11.608 [2024-04-27 05:01:41.330043] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:19:11.608 05:01:41 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:11.865 [2024-04-27 05:01:41.573810] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:11.865 [2024-04-27 05:01:41.574753] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:11.865 [2024-04-27 05:01:41.574912] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:11.865 [2024-04-27 05:01:41.575129] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:11.865 [2024-04-27 05:01:41.575259] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:11.865 [2024-04-27 05:01:41.575549] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:11.865 [2024-04-27 05:01:41.575686] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:11.865 [2024-04-27 05:01:41.575953] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:11.865 05:01:41 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:12.123 [2024-04-27 05:01:41.844026] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:12.123 BaseBdev1 00:19:12.123 05:01:41 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:12.123 05:01:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:12.123 05:01:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:12.123 05:01:41 -- common/autotest_common.sh@889 -- # local i 00:19:12.123 05:01:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:12.123 05:01:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:12.123 05:01:41 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:12.381 05:01:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:12.640 [ 00:19:12.640 { 00:19:12.640 "name": "BaseBdev1", 00:19:12.640 "aliases": [ 00:19:12.640 "e79923d2-d6d0-4167-8f67-9b7f5ade4984" 00:19:12.640 ], 00:19:12.640 "product_name": "Malloc disk", 00:19:12.640 "block_size": 512, 00:19:12.640 "num_blocks": 65536, 00:19:12.640 "uuid": "e79923d2-d6d0-4167-8f67-9b7f5ade4984", 00:19:12.640 "assigned_rate_limits": { 00:19:12.640 "rw_ios_per_sec": 0, 00:19:12.640 "rw_mbytes_per_sec": 0, 00:19:12.640 "r_mbytes_per_sec": 0, 00:19:12.640 "w_mbytes_per_sec": 0 00:19:12.640 }, 00:19:12.640 "claimed": true, 00:19:12.640 "claim_type": "exclusive_write", 00:19:12.640 "zoned": false, 00:19:12.640 "supported_io_types": { 00:19:12.640 "read": true, 00:19:12.640 "write": true, 00:19:12.640 "unmap": true, 00:19:12.640 "write_zeroes": true, 00:19:12.640 "flush": true, 00:19:12.640 "reset": true, 00:19:12.640 "compare": false, 00:19:12.640 "compare_and_write": false, 00:19:12.640 "abort": true, 00:19:12.640 "nvme_admin": false, 00:19:12.640 "nvme_io": false 00:19:12.640 }, 00:19:12.640 "memory_domains": [ 00:19:12.640 { 00:19:12.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.640 "dma_device_type": 2 00:19:12.640 } 00:19:12.640 ], 00:19:12.640 "driver_specific": {} 00:19:12.640 } 00:19:12.640 ] 00:19:12.640 05:01:42 -- common/autotest_common.sh@895 -- # return 0 00:19:12.640 05:01:42 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:12.640 05:01:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:12.640 05:01:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:12.640 05:01:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:12.640 05:01:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:12.640 05:01:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:12.640 05:01:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:12.640 05:01:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:12.640 05:01:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:12.640 05:01:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:12.640 05:01:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.640 05:01:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.898 05:01:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:12.898 "name": "Existed_Raid", 00:19:12.898 "uuid": "872c7e67-b4f5-4dd2-903b-c45f3452a231", 00:19:12.898 "strip_size_kb": 64, 00:19:12.898 "state": "configuring", 00:19:12.898 "raid_level": "raid0", 00:19:12.898 "superblock": true, 00:19:12.898 "num_base_bdevs": 4, 00:19:12.898 "num_base_bdevs_discovered": 1, 00:19:12.898 "num_base_bdevs_operational": 4, 00:19:12.898 "base_bdevs_list": [ 00:19:12.898 { 00:19:12.898 "name": "BaseBdev1", 00:19:12.898 "uuid": "e79923d2-d6d0-4167-8f67-9b7f5ade4984", 00:19:12.898 "is_configured": true, 00:19:12.898 "data_offset": 2048, 00:19:12.898 "data_size": 63488 00:19:12.898 }, 00:19:12.898 { 00:19:12.898 "name": "BaseBdev2", 00:19:12.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.898 "is_configured": false, 00:19:12.898 "data_offset": 0, 00:19:12.898 "data_size": 0 00:19:12.898 }, 
00:19:12.898 { 00:19:12.898 "name": "BaseBdev3", 00:19:12.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.898 "is_configured": false, 00:19:12.898 "data_offset": 0, 00:19:12.898 "data_size": 0 00:19:12.898 }, 00:19:12.898 { 00:19:12.899 "name": "BaseBdev4", 00:19:12.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.899 "is_configured": false, 00:19:12.899 "data_offset": 0, 00:19:12.899 "data_size": 0 00:19:12.899 } 00:19:12.899 ] 00:19:12.899 }' 00:19:12.899 05:01:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:12.899 05:01:42 -- common/autotest_common.sh@10 -- # set +x 00:19:13.464 05:01:43 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:14.030 [2024-04-27 05:01:43.624577] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:14.030 [2024-04-27 05:01:43.624939] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:14.030 05:01:43 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:14.030 05:01:43 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:14.030 05:01:43 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:14.288 BaseBdev1 00:19:14.288 05:01:44 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:14.288 05:01:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:14.288 05:01:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:14.288 05:01:44 -- common/autotest_common.sh@889 -- # local i 00:19:14.288 05:01:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:14.288 05:01:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:14.288 05:01:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:14.547 05:01:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:14.805 [ 00:19:14.805 { 00:19:14.805 "name": "BaseBdev1", 00:19:14.805 "aliases": [ 00:19:14.805 "298c32e7-d31f-425e-8153-b8592e0b112b" 00:19:14.805 ], 00:19:14.805 "product_name": "Malloc disk", 00:19:14.805 "block_size": 512, 00:19:14.805 "num_blocks": 65536, 00:19:14.805 "uuid": "298c32e7-d31f-425e-8153-b8592e0b112b", 00:19:14.805 "assigned_rate_limits": { 00:19:14.805 "rw_ios_per_sec": 0, 00:19:14.805 "rw_mbytes_per_sec": 0, 00:19:14.805 "r_mbytes_per_sec": 0, 00:19:14.805 "w_mbytes_per_sec": 0 00:19:14.805 }, 00:19:14.805 "claimed": false, 00:19:14.805 "zoned": false, 00:19:14.805 "supported_io_types": { 00:19:14.805 "read": true, 00:19:14.805 "write": true, 00:19:14.805 "unmap": true, 00:19:14.805 "write_zeroes": true, 00:19:14.805 "flush": true, 00:19:14.805 "reset": true, 00:19:14.805 "compare": false, 00:19:14.805 "compare_and_write": false, 00:19:14.805 "abort": true, 00:19:14.805 "nvme_admin": false, 00:19:14.805 "nvme_io": false 00:19:14.805 }, 00:19:14.805 "memory_domains": [ 00:19:14.805 { 00:19:14.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.805 "dma_device_type": 2 00:19:14.805 } 00:19:14.805 ], 00:19:14.805 "driver_specific": {} 00:19:14.805 } 00:19:14.805 ] 00:19:14.805 05:01:44 -- common/autotest_common.sh@895 -- # return 0 00:19:14.805 05:01:44 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:15.063 [2024-04-27 05:01:44.921629] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:15.063 [2024-04-27 05:01:44.924578] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:15.063 [2024-04-27 05:01:44.924855] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:15.063 [2024-04-27 05:01:44.924991] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:15.063 [2024-04-27 05:01:44.925067] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:15.063 [2024-04-27 05:01:44.925297] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:15.063 [2024-04-27 05:01:44.925368] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:15.063 05:01:44 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:15.063 05:01:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:15.063 05:01:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:15.063 05:01:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:15.063 05:01:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:15.063 05:01:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:15.063 05:01:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:15.063 05:01:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:15.063 05:01:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:15.063 05:01:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:15.063 05:01:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:15.063 05:01:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:15.063 05:01:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.063 05:01:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.320 05:01:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:15.320 "name": "Existed_Raid", 00:19:15.320 "uuid": "a864fffe-9a33-4024-ab7b-f70a863be5c3", 00:19:15.320 "strip_size_kb": 64, 00:19:15.320 "state": "configuring", 00:19:15.320 "raid_level": "raid0", 00:19:15.320 "superblock": true, 00:19:15.320 "num_base_bdevs": 4, 00:19:15.320 "num_base_bdevs_discovered": 1, 00:19:15.320 "num_base_bdevs_operational": 4, 00:19:15.320 "base_bdevs_list": [ 00:19:15.320 { 00:19:15.320 "name": "BaseBdev1", 00:19:15.320 "uuid": "298c32e7-d31f-425e-8153-b8592e0b112b", 00:19:15.320 "is_configured": true, 00:19:15.320 "data_offset": 2048, 00:19:15.320 "data_size": 63488 00:19:15.320 }, 00:19:15.320 { 00:19:15.320 "name": "BaseBdev2", 00:19:15.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.320 "is_configured": false, 00:19:15.320 "data_offset": 0, 00:19:15.320 "data_size": 0 00:19:15.320 }, 00:19:15.320 { 00:19:15.320 "name": "BaseBdev3", 00:19:15.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.320 "is_configured": false, 00:19:15.320 "data_offset": 0, 00:19:15.320 "data_size": 0 00:19:15.320 }, 00:19:15.320 { 00:19:15.320 "name": "BaseBdev4", 00:19:15.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.320 "is_configured": 
false, 00:19:15.320 "data_offset": 0, 00:19:15.320 "data_size": 0 00:19:15.320 } 00:19:15.320 ] 00:19:15.320 }' 00:19:15.320 05:01:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:15.320 05:01:45 -- common/autotest_common.sh@10 -- # set +x 00:19:16.253 05:01:45 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:16.253 [2024-04-27 05:01:46.111351] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:16.253 BaseBdev2 00:19:16.253 05:01:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:16.253 05:01:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:16.253 05:01:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:16.253 05:01:46 -- common/autotest_common.sh@889 -- # local i 00:19:16.253 05:01:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:16.253 05:01:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:16.253 05:01:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:16.511 05:01:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:16.769 [ 00:19:16.769 { 00:19:16.769 "name": "BaseBdev2", 00:19:16.769 "aliases": [ 00:19:16.769 "64b8e158-4f1b-4d4a-844b-203231945982" 00:19:16.769 ], 00:19:16.769 "product_name": "Malloc disk", 00:19:16.769 "block_size": 512, 00:19:16.769 "num_blocks": 65536, 00:19:16.769 "uuid": "64b8e158-4f1b-4d4a-844b-203231945982", 00:19:16.769 "assigned_rate_limits": { 00:19:16.769 "rw_ios_per_sec": 0, 00:19:16.769 "rw_mbytes_per_sec": 0, 00:19:16.769 "r_mbytes_per_sec": 0, 00:19:16.769 "w_mbytes_per_sec": 0 00:19:16.769 }, 00:19:16.769 "claimed": true, 00:19:16.769 "claim_type": "exclusive_write", 00:19:16.769 "zoned": false, 00:19:16.769 "supported_io_types": { 00:19:16.769 "read": true, 00:19:16.769 "write": true, 00:19:16.769 "unmap": true, 00:19:16.769 "write_zeroes": true, 00:19:16.769 "flush": true, 00:19:16.769 "reset": true, 00:19:16.769 "compare": false, 00:19:16.769 "compare_and_write": false, 00:19:16.769 "abort": true, 00:19:16.769 "nvme_admin": false, 00:19:16.769 "nvme_io": false 00:19:16.769 }, 00:19:16.769 "memory_domains": [ 00:19:16.769 { 00:19:16.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.769 "dma_device_type": 2 00:19:16.769 } 00:19:16.769 ], 00:19:16.769 "driver_specific": {} 00:19:16.769 } 00:19:16.769 ] 00:19:16.769 05:01:46 -- common/autotest_common.sh@895 -- # return 0 00:19:16.769 05:01:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:16.769 05:01:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:16.769 05:01:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:16.769 05:01:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:16.769 05:01:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:16.769 05:01:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:16.769 05:01:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:16.769 05:01:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:16.769 05:01:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:16.769 05:01:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:16.769 05:01:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:16.769 
05:01:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:16.769 05:01:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.769 05:01:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.027 05:01:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:17.027 "name": "Existed_Raid", 00:19:17.027 "uuid": "a864fffe-9a33-4024-ab7b-f70a863be5c3", 00:19:17.027 "strip_size_kb": 64, 00:19:17.027 "state": "configuring", 00:19:17.027 "raid_level": "raid0", 00:19:17.027 "superblock": true, 00:19:17.027 "num_base_bdevs": 4, 00:19:17.027 "num_base_bdevs_discovered": 2, 00:19:17.027 "num_base_bdevs_operational": 4, 00:19:17.027 "base_bdevs_list": [ 00:19:17.027 { 00:19:17.027 "name": "BaseBdev1", 00:19:17.027 "uuid": "298c32e7-d31f-425e-8153-b8592e0b112b", 00:19:17.027 "is_configured": true, 00:19:17.027 "data_offset": 2048, 00:19:17.027 "data_size": 63488 00:19:17.027 }, 00:19:17.027 { 00:19:17.027 "name": "BaseBdev2", 00:19:17.027 "uuid": "64b8e158-4f1b-4d4a-844b-203231945982", 00:19:17.027 "is_configured": true, 00:19:17.027 "data_offset": 2048, 00:19:17.027 "data_size": 63488 00:19:17.027 }, 00:19:17.027 { 00:19:17.027 "name": "BaseBdev3", 00:19:17.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.027 "is_configured": false, 00:19:17.027 "data_offset": 0, 00:19:17.027 "data_size": 0 00:19:17.027 }, 00:19:17.027 { 00:19:17.027 "name": "BaseBdev4", 00:19:17.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.027 "is_configured": false, 00:19:17.027 "data_offset": 0, 00:19:17.027 "data_size": 0 00:19:17.027 } 00:19:17.027 ] 00:19:17.027 }' 00:19:17.027 05:01:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:17.027 05:01:46 -- common/autotest_common.sh@10 -- # set +x 00:19:17.961 05:01:47 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:17.961 [2024-04-27 05:01:47.781800] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:17.961 BaseBdev3 00:19:17.961 05:01:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:17.961 05:01:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:17.961 05:01:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:17.961 05:01:47 -- common/autotest_common.sh@889 -- # local i 00:19:17.961 05:01:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:17.961 05:01:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:17.961 05:01:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:18.219 05:01:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:18.477 [ 00:19:18.477 { 00:19:18.477 "name": "BaseBdev3", 00:19:18.477 "aliases": [ 00:19:18.477 "2a8ea26a-5897-4eb1-9de5-bda26bf88ef7" 00:19:18.477 ], 00:19:18.477 "product_name": "Malloc disk", 00:19:18.477 "block_size": 512, 00:19:18.477 "num_blocks": 65536, 00:19:18.477 "uuid": "2a8ea26a-5897-4eb1-9de5-bda26bf88ef7", 00:19:18.477 "assigned_rate_limits": { 00:19:18.477 "rw_ios_per_sec": 0, 00:19:18.477 "rw_mbytes_per_sec": 0, 00:19:18.477 "r_mbytes_per_sec": 0, 00:19:18.477 "w_mbytes_per_sec": 0 00:19:18.477 }, 00:19:18.477 "claimed": true, 00:19:18.477 "claim_type": "exclusive_write", 00:19:18.477 "zoned": false, 
00:19:18.477 "supported_io_types": { 00:19:18.477 "read": true, 00:19:18.477 "write": true, 00:19:18.477 "unmap": true, 00:19:18.477 "write_zeroes": true, 00:19:18.477 "flush": true, 00:19:18.477 "reset": true, 00:19:18.477 "compare": false, 00:19:18.477 "compare_and_write": false, 00:19:18.477 "abort": true, 00:19:18.477 "nvme_admin": false, 00:19:18.477 "nvme_io": false 00:19:18.477 }, 00:19:18.477 "memory_domains": [ 00:19:18.477 { 00:19:18.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.477 "dma_device_type": 2 00:19:18.477 } 00:19:18.477 ], 00:19:18.477 "driver_specific": {} 00:19:18.477 } 00:19:18.477 ] 00:19:18.477 05:01:48 -- common/autotest_common.sh@895 -- # return 0 00:19:18.477 05:01:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:18.477 05:01:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:18.477 05:01:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:18.477 05:01:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:18.477 05:01:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:18.477 05:01:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:18.477 05:01:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:18.477 05:01:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:18.477 05:01:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:18.477 05:01:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:18.477 05:01:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:18.477 05:01:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:18.477 05:01:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.477 05:01:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.736 05:01:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:18.736 "name": "Existed_Raid", 00:19:18.736 "uuid": "a864fffe-9a33-4024-ab7b-f70a863be5c3", 00:19:18.736 "strip_size_kb": 64, 00:19:18.736 "state": "configuring", 00:19:18.736 "raid_level": "raid0", 00:19:18.736 "superblock": true, 00:19:18.736 "num_base_bdevs": 4, 00:19:18.736 "num_base_bdevs_discovered": 3, 00:19:18.736 "num_base_bdevs_operational": 4, 00:19:18.736 "base_bdevs_list": [ 00:19:18.736 { 00:19:18.736 "name": "BaseBdev1", 00:19:18.736 "uuid": "298c32e7-d31f-425e-8153-b8592e0b112b", 00:19:18.736 "is_configured": true, 00:19:18.736 "data_offset": 2048, 00:19:18.736 "data_size": 63488 00:19:18.736 }, 00:19:18.736 { 00:19:18.736 "name": "BaseBdev2", 00:19:18.736 "uuid": "64b8e158-4f1b-4d4a-844b-203231945982", 00:19:18.736 "is_configured": true, 00:19:18.736 "data_offset": 2048, 00:19:18.736 "data_size": 63488 00:19:18.736 }, 00:19:18.736 { 00:19:18.736 "name": "BaseBdev3", 00:19:18.736 "uuid": "2a8ea26a-5897-4eb1-9de5-bda26bf88ef7", 00:19:18.736 "is_configured": true, 00:19:18.736 "data_offset": 2048, 00:19:18.736 "data_size": 63488 00:19:18.736 }, 00:19:18.736 { 00:19:18.736 "name": "BaseBdev4", 00:19:18.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.736 "is_configured": false, 00:19:18.736 "data_offset": 0, 00:19:18.736 "data_size": 0 00:19:18.736 } 00:19:18.736 ] 00:19:18.736 }' 00:19:18.736 05:01:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:18.736 05:01:48 -- common/autotest_common.sh@10 -- # set +x 00:19:19.684 05:01:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4 00:19:19.684 [2024-04-27 05:01:49.504258] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:19.684 [2024-04-27 05:01:49.504900] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:19:19.684 [2024-04-27 05:01:49.505038] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:19.684 [2024-04-27 05:01:49.505261] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:19:19.684 [2024-04-27 05:01:49.505762] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:19:19.684 [2024-04-27 05:01:49.505895] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:19:19.684 [2024-04-27 05:01:49.506188] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.684 BaseBdev4 00:19:19.684 05:01:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:19.684 05:01:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:19.684 05:01:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:19.684 05:01:49 -- common/autotest_common.sh@889 -- # local i 00:19:19.684 05:01:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:19.684 05:01:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:19.684 05:01:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:19.942 05:01:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:20.200 [ 00:19:20.200 { 00:19:20.200 "name": "BaseBdev4", 00:19:20.200 "aliases": [ 00:19:20.200 "2c5a2e6f-9a96-4032-a1e9-94aa3ea1a4ab" 00:19:20.200 ], 00:19:20.200 "product_name": "Malloc disk", 00:19:20.200 "block_size": 512, 00:19:20.200 "num_blocks": 65536, 00:19:20.200 "uuid": "2c5a2e6f-9a96-4032-a1e9-94aa3ea1a4ab", 00:19:20.200 "assigned_rate_limits": { 00:19:20.200 "rw_ios_per_sec": 0, 00:19:20.200 "rw_mbytes_per_sec": 0, 00:19:20.200 "r_mbytes_per_sec": 0, 00:19:20.200 "w_mbytes_per_sec": 0 00:19:20.200 }, 00:19:20.200 "claimed": true, 00:19:20.200 "claim_type": "exclusive_write", 00:19:20.200 "zoned": false, 00:19:20.200 "supported_io_types": { 00:19:20.200 "read": true, 00:19:20.200 "write": true, 00:19:20.200 "unmap": true, 00:19:20.200 "write_zeroes": true, 00:19:20.200 "flush": true, 00:19:20.200 "reset": true, 00:19:20.200 "compare": false, 00:19:20.200 "compare_and_write": false, 00:19:20.200 "abort": true, 00:19:20.200 "nvme_admin": false, 00:19:20.200 "nvme_io": false 00:19:20.200 }, 00:19:20.200 "memory_domains": [ 00:19:20.200 { 00:19:20.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.200 "dma_device_type": 2 00:19:20.200 } 00:19:20.200 ], 00:19:20.200 "driver_specific": {} 00:19:20.200 } 00:19:20.200 ] 00:19:20.200 05:01:50 -- common/autotest_common.sh@895 -- # return 0 00:19:20.200 05:01:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:20.200 05:01:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:20.200 05:01:50 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:19:20.200 05:01:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:20.200 05:01:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:20.200 05:01:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
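The verification step that follows is a filtered RPC query; a sketch of the equivalent one-liner, using the same jq filter the harness applies (assumes the raid bdev created above is still named Existed_Raid):
  # fetch all raid bdevs and keep only Existed_Raid
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")'
  # fields the test then compares: .state ("online"), .raid_level ("raid0"),
  # .strip_size_kb (64) and .num_base_bdevs_operational (4)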
00:19:20.200 05:01:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:20.200 05:01:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:20.200 05:01:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:20.200 05:01:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:20.200 05:01:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:20.201 05:01:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:20.201 05:01:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.201 05:01:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.457 05:01:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:20.457 "name": "Existed_Raid", 00:19:20.457 "uuid": "a864fffe-9a33-4024-ab7b-f70a863be5c3", 00:19:20.457 "strip_size_kb": 64, 00:19:20.457 "state": "online", 00:19:20.457 "raid_level": "raid0", 00:19:20.457 "superblock": true, 00:19:20.457 "num_base_bdevs": 4, 00:19:20.457 "num_base_bdevs_discovered": 4, 00:19:20.457 "num_base_bdevs_operational": 4, 00:19:20.457 "base_bdevs_list": [ 00:19:20.457 { 00:19:20.457 "name": "BaseBdev1", 00:19:20.457 "uuid": "298c32e7-d31f-425e-8153-b8592e0b112b", 00:19:20.457 "is_configured": true, 00:19:20.457 "data_offset": 2048, 00:19:20.457 "data_size": 63488 00:19:20.457 }, 00:19:20.457 { 00:19:20.457 "name": "BaseBdev2", 00:19:20.457 "uuid": "64b8e158-4f1b-4d4a-844b-203231945982", 00:19:20.457 "is_configured": true, 00:19:20.457 "data_offset": 2048, 00:19:20.457 "data_size": 63488 00:19:20.457 }, 00:19:20.457 { 00:19:20.457 "name": "BaseBdev3", 00:19:20.457 "uuid": "2a8ea26a-5897-4eb1-9de5-bda26bf88ef7", 00:19:20.457 "is_configured": true, 00:19:20.457 "data_offset": 2048, 00:19:20.457 "data_size": 63488 00:19:20.457 }, 00:19:20.457 { 00:19:20.457 "name": "BaseBdev4", 00:19:20.457 "uuid": "2c5a2e6f-9a96-4032-a1e9-94aa3ea1a4ab", 00:19:20.457 "is_configured": true, 00:19:20.457 "data_offset": 2048, 00:19:20.457 "data_size": 63488 00:19:20.457 } 00:19:20.457 ] 00:19:20.457 }' 00:19:20.457 05:01:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:20.457 05:01:50 -- common/autotest_common.sh@10 -- # set +x 00:19:21.391 05:01:51 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:21.391 [2024-04-27 05:01:51.285250] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:21.391 [2024-04-27 05:01:51.285593] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:21.391 [2024-04-27 05:01:51.285801] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:21.649 05:01:51 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:21.649 05:01:51 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:19:21.649 05:01:51 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:21.649 05:01:51 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:21.649 05:01:51 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:21.649 05:01:51 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:19:21.649 05:01:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:21.649 05:01:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:21.649 05:01:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:21.649 05:01:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:21.649 05:01:51 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:19:21.649 05:01:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:21.649 05:01:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:21.649 05:01:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:21.649 05:01:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:21.649 05:01:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.649 05:01:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.908 05:01:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:21.908 "name": "Existed_Raid", 00:19:21.908 "uuid": "a864fffe-9a33-4024-ab7b-f70a863be5c3", 00:19:21.908 "strip_size_kb": 64, 00:19:21.908 "state": "offline", 00:19:21.908 "raid_level": "raid0", 00:19:21.908 "superblock": true, 00:19:21.908 "num_base_bdevs": 4, 00:19:21.908 "num_base_bdevs_discovered": 3, 00:19:21.908 "num_base_bdevs_operational": 3, 00:19:21.908 "base_bdevs_list": [ 00:19:21.908 { 00:19:21.908 "name": null, 00:19:21.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.908 "is_configured": false, 00:19:21.908 "data_offset": 2048, 00:19:21.908 "data_size": 63488 00:19:21.908 }, 00:19:21.908 { 00:19:21.908 "name": "BaseBdev2", 00:19:21.908 "uuid": "64b8e158-4f1b-4d4a-844b-203231945982", 00:19:21.908 "is_configured": true, 00:19:21.908 "data_offset": 2048, 00:19:21.908 "data_size": 63488 00:19:21.908 }, 00:19:21.908 { 00:19:21.908 "name": "BaseBdev3", 00:19:21.908 "uuid": "2a8ea26a-5897-4eb1-9de5-bda26bf88ef7", 00:19:21.908 "is_configured": true, 00:19:21.908 "data_offset": 2048, 00:19:21.908 "data_size": 63488 00:19:21.908 }, 00:19:21.908 { 00:19:21.908 "name": "BaseBdev4", 00:19:21.908 "uuid": "2c5a2e6f-9a96-4032-a1e9-94aa3ea1a4ab", 00:19:21.908 "is_configured": true, 00:19:21.908 "data_offset": 2048, 00:19:21.908 "data_size": 63488 00:19:21.908 } 00:19:21.908 ] 00:19:21.908 }' 00:19:21.908 05:01:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:21.908 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:19:22.475 05:01:52 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:22.475 05:01:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:22.475 05:01:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.475 05:01:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:22.733 05:01:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:22.733 05:01:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:22.733 05:01:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:22.990 [2024-04-27 05:01:52.717466] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:22.990 05:01:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:22.991 05:01:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:22.991 05:01:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.991 05:01:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:23.248 05:01:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:23.248 05:01:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:23.248 05:01:53 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:19:23.506 [2024-04-27 05:01:53.250519] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:23.506 05:01:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:23.506 05:01:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:23.506 05:01:53 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.506 05:01:53 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:23.763 05:01:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:23.763 05:01:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:23.763 05:01:53 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:24.021 [2024-04-27 05:01:53.800218] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:24.021 [2024-04-27 05:01:53.800661] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:19:24.021 05:01:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:24.021 05:01:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:24.021 05:01:53 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.021 05:01:53 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:24.279 05:01:54 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:24.279 05:01:54 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:24.279 05:01:54 -- bdev/bdev_raid.sh@287 -- # killprocess 131141 00:19:24.279 05:01:54 -- common/autotest_common.sh@926 -- # '[' -z 131141 ']' 00:19:24.279 05:01:54 -- common/autotest_common.sh@930 -- # kill -0 131141 00:19:24.279 05:01:54 -- common/autotest_common.sh@931 -- # uname 00:19:24.279 05:01:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:24.279 05:01:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131141 00:19:24.279 killing process with pid 131141 00:19:24.279 05:01:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:24.279 05:01:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:24.279 05:01:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131141' 00:19:24.280 05:01:54 -- common/autotest_common.sh@945 -- # kill 131141 00:19:24.280 05:01:54 -- common/autotest_common.sh@950 -- # wait 131141 00:19:24.280 [2024-04-27 05:01:54.110326] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:24.280 [2024-04-27 05:01:54.110429] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:24.847 00:19:24.847 real 0m15.721s 00:19:24.847 user 0m28.412s 00:19:24.847 sys 0m2.345s 00:19:24.847 05:01:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:24.847 05:01:54 -- common/autotest_common.sh@10 -- # set +x 00:19:24.847 ************************************ 00:19:24.847 END TEST raid_state_function_test_sb 00:19:24.847 ************************************ 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:19:24.847 05:01:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:24.847 05:01:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:24.847 05:01:54 -- common/autotest_common.sh@10 -- # set +x 00:19:24.847 ************************************ 00:19:24.847 START 
TEST raid_superblock_test 00:19:24.847 ************************************ 00:19:24.847 05:01:54 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@357 -- # raid_pid=131593 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:24.847 05:01:54 -- bdev/bdev_raid.sh@358 -- # waitforlisten 131593 /var/tmp/spdk-raid.sock 00:19:24.847 05:01:54 -- common/autotest_common.sh@819 -- # '[' -z 131593 ']' 00:19:24.847 05:01:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:24.847 05:01:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:24.847 05:01:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:24.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:24.847 05:01:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:24.847 05:01:54 -- common/autotest_common.sh@10 -- # set +x 00:19:24.847 [2024-04-27 05:01:54.641811] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
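Unlike the state-function tests, raid_superblock_test stacks a passthru bdev with a fixed UUID on top of each malloc base before building the raid. A rough sketch of that stacking, assembled from the commands in the trace below (a sketch only, not the harness's exact script):
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-raid.sock
  # one malloc + passthru pair per base; pt1..pt4 get deterministic UUIDs
  for i in 1 2 3 4; do
    $RPC -s $SOCK bdev_malloc_create 32 512 -b malloc$i
    $RPC -s $SOCK bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  # build the raid0 with superblock on the passthru devices
  $RPC -s $SOCK bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s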
00:19:24.847 [2024-04-27 05:01:54.642339] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131593 ] 00:19:25.106 [2024-04-27 05:01:54.802879] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.106 [2024-04-27 05:01:54.931081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.364 [2024-04-27 05:01:55.016915] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:25.930 05:01:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:25.930 05:01:55 -- common/autotest_common.sh@852 -- # return 0 00:19:25.930 05:01:55 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:25.930 05:01:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:25.930 05:01:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:25.930 05:01:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:25.930 05:01:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:25.930 05:01:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:25.930 05:01:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:25.930 05:01:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:25.930 05:01:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:26.188 malloc1 00:19:26.188 05:01:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:26.445 [2024-04-27 05:01:56.152027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:26.445 [2024-04-27 05:01:56.152488] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.445 [2024-04-27 05:01:56.152691] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:19:26.445 [2024-04-27 05:01:56.152902] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.445 [2024-04-27 05:01:56.156251] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.445 [2024-04-27 05:01:56.156441] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:26.445 pt1 00:19:26.445 05:01:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:26.445 05:01:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:26.445 05:01:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:26.445 05:01:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:26.445 05:01:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:26.445 05:01:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:26.445 05:01:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:26.445 05:01:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:26.445 05:01:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:26.703 malloc2 00:19:26.703 05:01:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:19:26.960 [2024-04-27 05:01:56.676469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:26.960 [2024-04-27 05:01:56.676876] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.960 [2024-04-27 05:01:56.676979] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:26.961 [2024-04-27 05:01:56.677329] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.961 [2024-04-27 05:01:56.680298] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.961 [2024-04-27 05:01:56.680470] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:26.961 pt2 00:19:26.961 05:01:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:26.961 05:01:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:26.961 05:01:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:26.961 05:01:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:26.961 05:01:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:26.961 05:01:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:26.961 05:01:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:26.961 05:01:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:26.961 05:01:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:27.218 malloc3 00:19:27.218 05:01:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:27.476 [2024-04-27 05:01:57.193395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:27.476 [2024-04-27 05:01:57.193829] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.476 [2024-04-27 05:01:57.193935] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:19:27.476 [2024-04-27 05:01:57.194208] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.476 [2024-04-27 05:01:57.197216] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.476 [2024-04-27 05:01:57.197400] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:27.476 pt3 00:19:27.476 05:01:57 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:27.476 05:01:57 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:27.476 05:01:57 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:19:27.476 05:01:57 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:19:27.476 05:01:57 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:27.476 05:01:57 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:27.476 05:01:57 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:27.476 05:01:57 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:27.476 05:01:57 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:27.822 malloc4 00:19:27.822 05:01:57 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:19:28.080 [2024-04-27 05:01:57.710673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:28.080 [2024-04-27 05:01:57.711044] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.080 [2024-04-27 05:01:57.711148] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:19:28.080 [2024-04-27 05:01:57.711409] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.080 [2024-04-27 05:01:57.714924] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.080 [2024-04-27 05:01:57.715132] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:28.080 pt4 00:19:28.080 05:01:57 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:28.080 05:01:57 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:28.080 05:01:57 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:28.336 [2024-04-27 05:01:58.003786] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:28.336 [2024-04-27 05:01:58.006648] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:28.336 [2024-04-27 05:01:58.006909] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:28.336 [2024-04-27 05:01:58.007051] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:28.336 [2024-04-27 05:01:58.007421] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:19:28.336 [2024-04-27 05:01:58.007543] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:28.336 [2024-04-27 05:01:58.007813] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:19:28.336 [2024-04-27 05:01:58.008430] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:19:28.336 [2024-04-27 05:01:58.008575] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:19:28.336 [2024-04-27 05:01:58.008951] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.336 05:01:58 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:28.336 05:01:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:28.336 05:01:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:28.336 05:01:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:28.336 05:01:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:28.336 05:01:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:28.336 05:01:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:28.336 05:01:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:28.336 05:01:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:28.336 05:01:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:28.336 05:01:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.336 05:01:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.593 05:01:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:28.593 "name": "raid_bdev1", 00:19:28.593 "uuid": 
"f6476d93-7be9-4a41-b4a3-8b7e0a9feb9f", 00:19:28.593 "strip_size_kb": 64, 00:19:28.593 "state": "online", 00:19:28.593 "raid_level": "raid0", 00:19:28.593 "superblock": true, 00:19:28.593 "num_base_bdevs": 4, 00:19:28.593 "num_base_bdevs_discovered": 4, 00:19:28.593 "num_base_bdevs_operational": 4, 00:19:28.593 "base_bdevs_list": [ 00:19:28.593 { 00:19:28.593 "name": "pt1", 00:19:28.593 "uuid": "8db37fcf-9248-5c8f-a360-2ff83b86f5e1", 00:19:28.593 "is_configured": true, 00:19:28.593 "data_offset": 2048, 00:19:28.593 "data_size": 63488 00:19:28.593 }, 00:19:28.593 { 00:19:28.593 "name": "pt2", 00:19:28.593 "uuid": "b813b8d1-3c9c-59b5-8f1b-8bc6efee7b9f", 00:19:28.593 "is_configured": true, 00:19:28.593 "data_offset": 2048, 00:19:28.593 "data_size": 63488 00:19:28.593 }, 00:19:28.593 { 00:19:28.593 "name": "pt3", 00:19:28.593 "uuid": "949cd172-ba55-50a9-a536-f248b6dddc14", 00:19:28.593 "is_configured": true, 00:19:28.593 "data_offset": 2048, 00:19:28.593 "data_size": 63488 00:19:28.593 }, 00:19:28.593 { 00:19:28.593 "name": "pt4", 00:19:28.593 "uuid": "9550817b-73d4-523d-868f-8b22a2cb4f0f", 00:19:28.593 "is_configured": true, 00:19:28.593 "data_offset": 2048, 00:19:28.593 "data_size": 63488 00:19:28.593 } 00:19:28.593 ] 00:19:28.593 }' 00:19:28.593 05:01:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:28.593 05:01:58 -- common/autotest_common.sh@10 -- # set +x 00:19:29.159 05:01:58 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:29.159 05:01:58 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:29.417 [2024-04-27 05:01:59.184258] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:29.417 05:01:59 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f6476d93-7be9-4a41-b4a3-8b7e0a9feb9f 00:19:29.417 05:01:59 -- bdev/bdev_raid.sh@380 -- # '[' -z f6476d93-7be9-4a41-b4a3-8b7e0a9feb9f ']' 00:19:29.417 05:01:59 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:29.676 [2024-04-27 05:01:59.431977] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:29.676 [2024-04-27 05:01:59.432330] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:29.676 [2024-04-27 05:01:59.432620] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:29.676 [2024-04-27 05:01:59.432843] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:29.676 [2024-04-27 05:01:59.432971] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:19:29.676 05:01:59 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:29.676 05:01:59 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.934 05:01:59 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:29.934 05:01:59 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:29.934 05:01:59 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:29.934 05:01:59 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:30.192 05:02:00 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:30.192 05:02:00 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:19:30.450 05:02:00 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:30.450 05:02:00 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:30.709 05:02:00 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:30.709 05:02:00 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:30.968 05:02:00 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:30.968 05:02:00 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:31.226 05:02:00 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:31.226 05:02:00 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:31.226 05:02:00 -- common/autotest_common.sh@640 -- # local es=0 00:19:31.226 05:02:00 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:31.226 05:02:00 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:31.226 05:02:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:31.226 05:02:00 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:31.226 05:02:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:31.226 05:02:01 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:31.226 05:02:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:31.226 05:02:01 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:31.226 05:02:01 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:31.226 05:02:01 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:31.483 [2024-04-27 05:02:01.260374] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:31.483 [2024-04-27 05:02:01.263071] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:31.483 [2024-04-27 05:02:01.263265] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:31.483 [2024-04-27 05:02:01.263366] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:31.483 [2024-04-27 05:02:01.263598] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:31.483 [2024-04-27 05:02:01.263849] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:31.483 [2024-04-27 05:02:01.264023] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:31.483 [2024-04-27 05:02:01.264239] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:31.483 [2024-04-27 05:02:01.264395] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:31.483 [2024-04-27 05:02:01.264514] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:19:31.483 request: 00:19:31.483 { 00:19:31.483 "name": "raid_bdev1", 00:19:31.483 "raid_level": "raid0", 00:19:31.483 "base_bdevs": [ 00:19:31.483 "malloc1", 00:19:31.483 "malloc2", 00:19:31.483 "malloc3", 00:19:31.483 "malloc4" 00:19:31.483 ], 00:19:31.483 "superblock": false, 00:19:31.483 "strip_size_kb": 64, 00:19:31.483 "method": "bdev_raid_create", 00:19:31.483 "req_id": 1 00:19:31.483 } 00:19:31.483 Got JSON-RPC error response 00:19:31.483 response: 00:19:31.483 { 00:19:31.483 "code": -17, 00:19:31.483 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:31.483 } 00:19:31.483 05:02:01 -- common/autotest_common.sh@643 -- # es=1 00:19:31.483 05:02:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:31.483 05:02:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:31.483 05:02:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:31.483 05:02:01 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.483 05:02:01 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:31.741 05:02:01 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:31.741 05:02:01 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:31.741 05:02:01 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:32.000 [2024-04-27 05:02:01.749053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:32.000 [2024-04-27 05:02:01.749473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.000 [2024-04-27 05:02:01.749647] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:32.000 [2024-04-27 05:02:01.749786] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.000 [2024-04-27 05:02:01.752848] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.000 [2024-04-27 05:02:01.753064] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:32.000 [2024-04-27 05:02:01.753322] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:32.000 [2024-04-27 05:02:01.753507] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:32.000 pt1 00:19:32.000 05:02:01 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:19:32.000 05:02:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:32.000 05:02:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:32.000 05:02:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:32.000 05:02:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:32.000 05:02:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:32.000 05:02:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:32.000 05:02:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:32.000 05:02:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:32.000 05:02:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:32.000 05:02:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.000 05:02:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.259 05:02:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:32.259 "name": "raid_bdev1", 00:19:32.259 "uuid": "f6476d93-7be9-4a41-b4a3-8b7e0a9feb9f", 00:19:32.259 "strip_size_kb": 64, 00:19:32.259 "state": "configuring", 00:19:32.259 "raid_level": "raid0", 00:19:32.259 "superblock": true, 00:19:32.259 "num_base_bdevs": 4, 00:19:32.259 "num_base_bdevs_discovered": 1, 00:19:32.259 "num_base_bdevs_operational": 4, 00:19:32.259 "base_bdevs_list": [ 00:19:32.259 { 00:19:32.259 "name": "pt1", 00:19:32.259 "uuid": "8db37fcf-9248-5c8f-a360-2ff83b86f5e1", 00:19:32.259 "is_configured": true, 00:19:32.259 "data_offset": 2048, 00:19:32.259 "data_size": 63488 00:19:32.259 }, 00:19:32.259 { 00:19:32.259 "name": null, 00:19:32.259 "uuid": "b813b8d1-3c9c-59b5-8f1b-8bc6efee7b9f", 00:19:32.259 "is_configured": false, 00:19:32.259 "data_offset": 2048, 00:19:32.259 "data_size": 63488 00:19:32.259 }, 00:19:32.259 { 00:19:32.259 "name": null, 00:19:32.259 "uuid": "949cd172-ba55-50a9-a536-f248b6dddc14", 00:19:32.259 "is_configured": false, 00:19:32.259 "data_offset": 2048, 00:19:32.259 "data_size": 63488 00:19:32.259 }, 00:19:32.259 { 00:19:32.259 "name": null, 00:19:32.259 "uuid": "9550817b-73d4-523d-868f-8b22a2cb4f0f", 00:19:32.259 "is_configured": false, 00:19:32.259 "data_offset": 2048, 00:19:32.259 "data_size": 63488 00:19:32.259 } 00:19:32.259 ] 00:19:32.259 }' 00:19:32.259 05:02:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:32.259 05:02:02 -- common/autotest_common.sh@10 -- # set +x 00:19:32.825 05:02:02 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:19:32.825 05:02:02 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:33.089 [2024-04-27 05:02:02.865763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:33.089 [2024-04-27 05:02:02.866196] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.089 [2024-04-27 05:02:02.866421] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:33.089 [2024-04-27 05:02:02.866573] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.089 [2024-04-27 05:02:02.867274] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.089 [2024-04-27 05:02:02.867463] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:33.089 [2024-04-27 05:02:02.867705] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:33.089 [2024-04-27 05:02:02.867868] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:33.089 pt2 00:19:33.089 05:02:02 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:33.348 [2024-04-27 05:02:03.105837] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:33.348 05:02:03 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:19:33.348 05:02:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:33.348 05:02:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:33.348 05:02:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:33.348 05:02:03 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:33.348 05:02:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:33.348 05:02:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:33.348 05:02:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:33.348 05:02:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:33.348 05:02:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:33.348 05:02:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.348 05:02:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.606 05:02:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:33.606 "name": "raid_bdev1", 00:19:33.606 "uuid": "f6476d93-7be9-4a41-b4a3-8b7e0a9feb9f", 00:19:33.606 "strip_size_kb": 64, 00:19:33.606 "state": "configuring", 00:19:33.606 "raid_level": "raid0", 00:19:33.606 "superblock": true, 00:19:33.606 "num_base_bdevs": 4, 00:19:33.606 "num_base_bdevs_discovered": 1, 00:19:33.606 "num_base_bdevs_operational": 4, 00:19:33.606 "base_bdevs_list": [ 00:19:33.606 { 00:19:33.606 "name": "pt1", 00:19:33.606 "uuid": "8db37fcf-9248-5c8f-a360-2ff83b86f5e1", 00:19:33.606 "is_configured": true, 00:19:33.606 "data_offset": 2048, 00:19:33.606 "data_size": 63488 00:19:33.606 }, 00:19:33.606 { 00:19:33.606 "name": null, 00:19:33.606 "uuid": "b813b8d1-3c9c-59b5-8f1b-8bc6efee7b9f", 00:19:33.606 "is_configured": false, 00:19:33.606 "data_offset": 2048, 00:19:33.606 "data_size": 63488 00:19:33.606 }, 00:19:33.606 { 00:19:33.606 "name": null, 00:19:33.606 "uuid": "949cd172-ba55-50a9-a536-f248b6dddc14", 00:19:33.606 "is_configured": false, 00:19:33.606 "data_offset": 2048, 00:19:33.606 "data_size": 63488 00:19:33.606 }, 00:19:33.606 { 00:19:33.606 "name": null, 00:19:33.606 "uuid": "9550817b-73d4-523d-868f-8b22a2cb4f0f", 00:19:33.606 "is_configured": false, 00:19:33.606 "data_offset": 2048, 00:19:33.606 "data_size": 63488 00:19:33.606 } 00:19:33.606 ] 00:19:33.606 }' 00:19:33.606 05:02:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:33.606 05:02:03 -- common/autotest_common.sh@10 -- # set +x 00:19:34.547 05:02:04 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:34.547 05:02:04 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:34.547 05:02:04 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:34.547 [2024-04-27 05:02:04.314170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:34.547 [2024-04-27 05:02:04.314625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.547 [2024-04-27 05:02:04.314729] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:34.547 [2024-04-27 05:02:04.315002] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.547 [2024-04-27 05:02:04.315638] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.547 [2024-04-27 05:02:04.315851] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:34.547 [2024-04-27 05:02:04.316095] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:34.547 [2024-04-27 05:02:04.316243] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:34.547 pt2 00:19:34.547 05:02:04 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:34.547 05:02:04 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:34.547 05:02:04 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:34.805 [2024-04-27 05:02:04.598208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:34.805 [2024-04-27 05:02:04.598650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.805 [2024-04-27 05:02:04.598743] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:34.805 [2024-04-27 05:02:04.598964] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.805 [2024-04-27 05:02:04.599587] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.805 [2024-04-27 05:02:04.599796] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:34.805 [2024-04-27 05:02:04.600033] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:34.805 [2024-04-27 05:02:04.600171] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:34.805 pt3 00:19:34.805 05:02:04 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:34.805 05:02:04 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:34.805 05:02:04 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:35.064 [2024-04-27 05:02:04.878290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:35.064 [2024-04-27 05:02:04.878670] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.064 [2024-04-27 05:02:04.878771] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:35.064 [2024-04-27 05:02:04.879058] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.064 [2024-04-27 05:02:04.879653] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.064 [2024-04-27 05:02:04.879862] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:35.064 [2024-04-27 05:02:04.880102] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:35.064 [2024-04-27 05:02:04.880248] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:35.064 [2024-04-27 05:02:04.880547] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:19:35.064 [2024-04-27 05:02:04.880699] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:35.064 [2024-04-27 05:02:04.880839] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:35.064 [2024-04-27 05:02:04.881256] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:19:35.064 [2024-04-27 05:02:04.881388] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:19:35.064 [2024-04-27 05:02:04.881616] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.064 pt4 00:19:35.064 05:02:04 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:35.064 05:02:04 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:19:35.064 05:02:04 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:35.064 05:02:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:35.064 05:02:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:35.064 05:02:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:35.064 05:02:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:35.064 05:02:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:35.064 05:02:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:35.064 05:02:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:35.064 05:02:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:35.064 05:02:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:35.064 05:02:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.064 05:02:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.322 05:02:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:35.322 "name": "raid_bdev1", 00:19:35.322 "uuid": "f6476d93-7be9-4a41-b4a3-8b7e0a9feb9f", 00:19:35.322 "strip_size_kb": 64, 00:19:35.322 "state": "online", 00:19:35.322 "raid_level": "raid0", 00:19:35.322 "superblock": true, 00:19:35.322 "num_base_bdevs": 4, 00:19:35.322 "num_base_bdevs_discovered": 4, 00:19:35.322 "num_base_bdevs_operational": 4, 00:19:35.322 "base_bdevs_list": [ 00:19:35.322 { 00:19:35.322 "name": "pt1", 00:19:35.322 "uuid": "8db37fcf-9248-5c8f-a360-2ff83b86f5e1", 00:19:35.322 "is_configured": true, 00:19:35.322 "data_offset": 2048, 00:19:35.322 "data_size": 63488 00:19:35.322 }, 00:19:35.322 { 00:19:35.322 "name": "pt2", 00:19:35.322 "uuid": "b813b8d1-3c9c-59b5-8f1b-8bc6efee7b9f", 00:19:35.322 "is_configured": true, 00:19:35.322 "data_offset": 2048, 00:19:35.322 "data_size": 63488 00:19:35.322 }, 00:19:35.322 { 00:19:35.322 "name": "pt3", 00:19:35.322 "uuid": "949cd172-ba55-50a9-a536-f248b6dddc14", 00:19:35.322 "is_configured": true, 00:19:35.322 "data_offset": 2048, 00:19:35.322 "data_size": 63488 00:19:35.322 }, 00:19:35.322 { 00:19:35.322 "name": "pt4", 00:19:35.322 "uuid": "9550817b-73d4-523d-868f-8b22a2cb4f0f", 00:19:35.322 "is_configured": true, 00:19:35.322 "data_offset": 2048, 00:19:35.322 "data_size": 63488 00:19:35.322 } 00:19:35.322 ] 00:19:35.322 }' 00:19:35.322 05:02:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:35.322 05:02:05 -- common/autotest_common.sh@10 -- # set +x 00:19:36.258 05:02:05 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:36.258 05:02:05 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:36.258 [2024-04-27 05:02:06.030821] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:36.258 05:02:06 -- bdev/bdev_raid.sh@430 -- # '[' f6476d93-7be9-4a41-b4a3-8b7e0a9feb9f '!=' f6476d93-7be9-4a41-b4a3-8b7e0a9feb9f ']' 00:19:36.258 05:02:06 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:19:36.258 05:02:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:36.258 05:02:06 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:36.258 05:02:06 -- bdev/bdev_raid.sh@511 -- # killprocess 131593 00:19:36.258 05:02:06 -- common/autotest_common.sh@926 -- # '[' -z 131593 ']' 00:19:36.258 05:02:06 -- common/autotest_common.sh@930 -- # kill -0 131593 00:19:36.258 05:02:06 -- common/autotest_common.sh@931 -- # uname 00:19:36.258 05:02:06 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:36.258 05:02:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131593 00:19:36.258 05:02:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:36.258 05:02:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:36.258 05:02:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131593' 00:19:36.258 killing process with pid 131593 00:19:36.258 05:02:06 -- common/autotest_common.sh@945 -- # kill 131593 00:19:36.258 [2024-04-27 05:02:06.079731] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:36.258 05:02:06 -- common/autotest_common.sh@950 -- # wait 131593 00:19:36.258 [2024-04-27 05:02:06.080071] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:36.258 [2024-04-27 05:02:06.080279] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:36.258 [2024-04-27 05:02:06.080398] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:19:36.517 [2024-04-27 05:02:06.176464] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:36.776 00:19:36.776 real 0m11.973s 00:19:36.776 user 0m21.568s 00:19:36.776 sys 0m1.580s 00:19:36.776 05:02:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:36.776 05:02:06 -- common/autotest_common.sh@10 -- # set +x 00:19:36.776 ************************************ 00:19:36.776 END TEST raid_superblock_test 00:19:36.776 ************************************ 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:19:36.776 05:02:06 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:36.776 05:02:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:36.776 05:02:06 -- common/autotest_common.sh@10 -- # set +x 00:19:36.776 ************************************ 00:19:36.776 START TEST raid_state_function_test 00:19:36.776 ************************************ 00:19:36.776 05:02:06 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:36.776 
05:02:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@226 -- # raid_pid=131923 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:36.776 Process raid pid: 131923 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 131923' 00:19:36.776 05:02:06 -- bdev/bdev_raid.sh@228 -- # waitforlisten 131923 /var/tmp/spdk-raid.sock 00:19:36.776 05:02:06 -- common/autotest_common.sh@819 -- # '[' -z 131923 ']' 00:19:36.776 05:02:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:36.776 05:02:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:36.776 05:02:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:36.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:36.776 05:02:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:36.776 05:02:06 -- common/autotest_common.sh@10 -- # set +x 00:19:37.035 [2024-04-27 05:02:06.690151] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:19:37.035 [2024-04-27 05:02:06.690744] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.035 [2024-04-27 05:02:06.855389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.294 [2024-04-27 05:02:06.985764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.294 [2024-04-27 05:02:07.076486] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:37.862 05:02:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:37.862 05:02:07 -- common/autotest_common.sh@852 -- # return 0 00:19:37.862 05:02:07 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:38.120 [2024-04-27 05:02:07.901469] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:38.120 [2024-04-27 05:02:07.901880] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:38.120 [2024-04-27 05:02:07.902006] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:38.120 [2024-04-27 05:02:07.902170] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:38.120 [2024-04-27 05:02:07.902276] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:38.120 [2024-04-27 05:02:07.902435] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:38.120 [2024-04-27 05:02:07.902539] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:38.120 [2024-04-27 05:02:07.902684] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:38.121 05:02:07 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:38.121 05:02:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:38.121 05:02:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:38.121 05:02:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:38.121 05:02:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:38.121 05:02:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:38.121 05:02:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:38.121 05:02:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:38.121 05:02:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:38.121 05:02:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:38.121 05:02:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.121 05:02:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.379 05:02:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:38.379 "name": "Existed_Raid", 00:19:38.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.379 "strip_size_kb": 64, 00:19:38.379 "state": "configuring", 00:19:38.379 "raid_level": "concat", 00:19:38.379 "superblock": false, 00:19:38.379 "num_base_bdevs": 4, 00:19:38.379 "num_base_bdevs_discovered": 0, 00:19:38.379 "num_base_bdevs_operational": 4, 00:19:38.379 "base_bdevs_list": [ 00:19:38.379 { 00:19:38.379 
"name": "BaseBdev1", 00:19:38.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.379 "is_configured": false, 00:19:38.379 "data_offset": 0, 00:19:38.379 "data_size": 0 00:19:38.379 }, 00:19:38.379 { 00:19:38.379 "name": "BaseBdev2", 00:19:38.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.379 "is_configured": false, 00:19:38.379 "data_offset": 0, 00:19:38.379 "data_size": 0 00:19:38.379 }, 00:19:38.379 { 00:19:38.379 "name": "BaseBdev3", 00:19:38.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.379 "is_configured": false, 00:19:38.379 "data_offset": 0, 00:19:38.379 "data_size": 0 00:19:38.379 }, 00:19:38.379 { 00:19:38.379 "name": "BaseBdev4", 00:19:38.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.379 "is_configured": false, 00:19:38.379 "data_offset": 0, 00:19:38.379 "data_size": 0 00:19:38.379 } 00:19:38.379 ] 00:19:38.379 }' 00:19:38.379 05:02:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:38.379 05:02:08 -- common/autotest_common.sh@10 -- # set +x 00:19:38.946 05:02:08 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:39.203 [2024-04-27 05:02:09.061544] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:39.203 [2024-04-27 05:02:09.061887] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:19:39.203 05:02:09 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:39.461 [2024-04-27 05:02:09.305658] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:39.461 [2024-04-27 05:02:09.306048] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:39.461 [2024-04-27 05:02:09.306199] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:39.461 [2024-04-27 05:02:09.306337] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:39.461 [2024-04-27 05:02:09.306440] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:39.461 [2024-04-27 05:02:09.306527] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:39.461 [2024-04-27 05:02:09.306647] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:39.461 [2024-04-27 05:02:09.306783] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:39.461 05:02:09 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:39.718 [2024-04-27 05:02:09.593650] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:39.718 BaseBdev1 00:19:39.718 05:02:09 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:39.718 05:02:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:39.718 05:02:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:39.718 05:02:09 -- common/autotest_common.sh@889 -- # local i 00:19:39.718 05:02:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:39.718 05:02:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:39.718 05:02:09 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:39.976 05:02:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:40.235 [ 00:19:40.235 { 00:19:40.235 "name": "BaseBdev1", 00:19:40.235 "aliases": [ 00:19:40.235 "b911e835-3697-406e-98d0-294679d3edef" 00:19:40.235 ], 00:19:40.235 "product_name": "Malloc disk", 00:19:40.235 "block_size": 512, 00:19:40.235 "num_blocks": 65536, 00:19:40.235 "uuid": "b911e835-3697-406e-98d0-294679d3edef", 00:19:40.235 "assigned_rate_limits": { 00:19:40.235 "rw_ios_per_sec": 0, 00:19:40.235 "rw_mbytes_per_sec": 0, 00:19:40.235 "r_mbytes_per_sec": 0, 00:19:40.235 "w_mbytes_per_sec": 0 00:19:40.235 }, 00:19:40.235 "claimed": true, 00:19:40.235 "claim_type": "exclusive_write", 00:19:40.235 "zoned": false, 00:19:40.235 "supported_io_types": { 00:19:40.235 "read": true, 00:19:40.235 "write": true, 00:19:40.235 "unmap": true, 00:19:40.235 "write_zeroes": true, 00:19:40.235 "flush": true, 00:19:40.235 "reset": true, 00:19:40.235 "compare": false, 00:19:40.235 "compare_and_write": false, 00:19:40.235 "abort": true, 00:19:40.235 "nvme_admin": false, 00:19:40.235 "nvme_io": false 00:19:40.235 }, 00:19:40.235 "memory_domains": [ 00:19:40.235 { 00:19:40.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.235 "dma_device_type": 2 00:19:40.235 } 00:19:40.235 ], 00:19:40.235 "driver_specific": {} 00:19:40.235 } 00:19:40.235 ] 00:19:40.235 05:02:10 -- common/autotest_common.sh@895 -- # return 0 00:19:40.235 05:02:10 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:40.235 05:02:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:40.235 05:02:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:40.235 05:02:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:40.235 05:02:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:40.235 05:02:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:40.235 05:02:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:40.235 05:02:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:40.235 05:02:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:40.235 05:02:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:40.235 05:02:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.235 05:02:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.494 05:02:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:40.494 "name": "Existed_Raid", 00:19:40.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.494 "strip_size_kb": 64, 00:19:40.494 "state": "configuring", 00:19:40.494 "raid_level": "concat", 00:19:40.494 "superblock": false, 00:19:40.494 "num_base_bdevs": 4, 00:19:40.494 "num_base_bdevs_discovered": 1, 00:19:40.494 "num_base_bdevs_operational": 4, 00:19:40.494 "base_bdevs_list": [ 00:19:40.494 { 00:19:40.494 "name": "BaseBdev1", 00:19:40.494 "uuid": "b911e835-3697-406e-98d0-294679d3edef", 00:19:40.494 "is_configured": true, 00:19:40.494 "data_offset": 0, 00:19:40.494 "data_size": 65536 00:19:40.494 }, 00:19:40.494 { 00:19:40.494 "name": "BaseBdev2", 00:19:40.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.494 "is_configured": false, 00:19:40.494 "data_offset": 0, 00:19:40.494 "data_size": 0 00:19:40.494 }, 
00:19:40.494 { 00:19:40.494 "name": "BaseBdev3", 00:19:40.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.494 "is_configured": false, 00:19:40.494 "data_offset": 0, 00:19:40.494 "data_size": 0 00:19:40.494 }, 00:19:40.494 { 00:19:40.494 "name": "BaseBdev4", 00:19:40.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.494 "is_configured": false, 00:19:40.494 "data_offset": 0, 00:19:40.494 "data_size": 0 00:19:40.494 } 00:19:40.494 ] 00:19:40.494 }' 00:19:40.494 05:02:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:40.494 05:02:10 -- common/autotest_common.sh@10 -- # set +x 00:19:41.429 05:02:11 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:41.429 [2024-04-27 05:02:11.266174] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:41.429 [2024-04-27 05:02:11.266567] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:41.429 05:02:11 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:41.429 05:02:11 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:41.688 [2024-04-27 05:02:11.530336] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:41.688 [2024-04-27 05:02:11.533161] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:41.688 [2024-04-27 05:02:11.533401] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:41.688 [2024-04-27 05:02:11.533521] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:41.688 [2024-04-27 05:02:11.533599] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:41.688 [2024-04-27 05:02:11.533700] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:41.688 [2024-04-27 05:02:11.533855] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:41.688 05:02:11 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:41.688 05:02:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:41.688 05:02:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:41.688 05:02:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:41.688 05:02:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:41.688 05:02:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:41.688 05:02:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:41.688 05:02:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:41.688 05:02:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:41.688 05:02:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:41.688 05:02:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:41.688 05:02:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:41.688 05:02:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.688 05:02:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.946 05:02:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:41.946 "name": "Existed_Raid", 00:19:41.946 
"uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.946 "strip_size_kb": 64, 00:19:41.946 "state": "configuring", 00:19:41.946 "raid_level": "concat", 00:19:41.946 "superblock": false, 00:19:41.946 "num_base_bdevs": 4, 00:19:41.946 "num_base_bdevs_discovered": 1, 00:19:41.946 "num_base_bdevs_operational": 4, 00:19:41.946 "base_bdevs_list": [ 00:19:41.946 { 00:19:41.946 "name": "BaseBdev1", 00:19:41.946 "uuid": "b911e835-3697-406e-98d0-294679d3edef", 00:19:41.946 "is_configured": true, 00:19:41.946 "data_offset": 0, 00:19:41.946 "data_size": 65536 00:19:41.946 }, 00:19:41.946 { 00:19:41.946 "name": "BaseBdev2", 00:19:41.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.946 "is_configured": false, 00:19:41.946 "data_offset": 0, 00:19:41.946 "data_size": 0 00:19:41.946 }, 00:19:41.946 { 00:19:41.946 "name": "BaseBdev3", 00:19:41.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.946 "is_configured": false, 00:19:41.946 "data_offset": 0, 00:19:41.946 "data_size": 0 00:19:41.946 }, 00:19:41.946 { 00:19:41.946 "name": "BaseBdev4", 00:19:41.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.946 "is_configured": false, 00:19:41.946 "data_offset": 0, 00:19:41.946 "data_size": 0 00:19:41.946 } 00:19:41.946 ] 00:19:41.946 }' 00:19:41.946 05:02:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:41.946 05:02:11 -- common/autotest_common.sh@10 -- # set +x 00:19:42.899 05:02:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:42.900 [2024-04-27 05:02:12.722214] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:42.900 BaseBdev2 00:19:42.900 05:02:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:42.900 05:02:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:42.900 05:02:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:42.900 05:02:12 -- common/autotest_common.sh@889 -- # local i 00:19:42.900 05:02:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:42.900 05:02:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:42.900 05:02:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:43.158 05:02:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:43.416 [ 00:19:43.416 { 00:19:43.416 "name": "BaseBdev2", 00:19:43.416 "aliases": [ 00:19:43.416 "38ca3544-bbbf-4ac7-bb7f-66de759b5c3f" 00:19:43.416 ], 00:19:43.416 "product_name": "Malloc disk", 00:19:43.416 "block_size": 512, 00:19:43.416 "num_blocks": 65536, 00:19:43.416 "uuid": "38ca3544-bbbf-4ac7-bb7f-66de759b5c3f", 00:19:43.416 "assigned_rate_limits": { 00:19:43.416 "rw_ios_per_sec": 0, 00:19:43.416 "rw_mbytes_per_sec": 0, 00:19:43.416 "r_mbytes_per_sec": 0, 00:19:43.416 "w_mbytes_per_sec": 0 00:19:43.416 }, 00:19:43.416 "claimed": true, 00:19:43.416 "claim_type": "exclusive_write", 00:19:43.416 "zoned": false, 00:19:43.416 "supported_io_types": { 00:19:43.416 "read": true, 00:19:43.416 "write": true, 00:19:43.416 "unmap": true, 00:19:43.416 "write_zeroes": true, 00:19:43.416 "flush": true, 00:19:43.416 "reset": true, 00:19:43.416 "compare": false, 00:19:43.416 "compare_and_write": false, 00:19:43.416 "abort": true, 00:19:43.416 "nvme_admin": false, 00:19:43.416 "nvme_io": false 00:19:43.416 }, 00:19:43.416 "memory_domains": [ 
00:19:43.416 { 00:19:43.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.416 "dma_device_type": 2 00:19:43.416 } 00:19:43.416 ], 00:19:43.416 "driver_specific": {} 00:19:43.416 } 00:19:43.416 ] 00:19:43.416 05:02:13 -- common/autotest_common.sh@895 -- # return 0 00:19:43.416 05:02:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:43.416 05:02:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:43.416 05:02:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:43.416 05:02:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:43.416 05:02:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:43.416 05:02:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:43.416 05:02:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:43.416 05:02:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:43.416 05:02:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:43.416 05:02:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:43.416 05:02:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:43.416 05:02:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:43.416 05:02:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.416 05:02:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.675 05:02:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:43.675 "name": "Existed_Raid", 00:19:43.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.675 "strip_size_kb": 64, 00:19:43.675 "state": "configuring", 00:19:43.675 "raid_level": "concat", 00:19:43.675 "superblock": false, 00:19:43.675 "num_base_bdevs": 4, 00:19:43.675 "num_base_bdevs_discovered": 2, 00:19:43.675 "num_base_bdevs_operational": 4, 00:19:43.675 "base_bdevs_list": [ 00:19:43.675 { 00:19:43.675 "name": "BaseBdev1", 00:19:43.675 "uuid": "b911e835-3697-406e-98d0-294679d3edef", 00:19:43.675 "is_configured": true, 00:19:43.675 "data_offset": 0, 00:19:43.675 "data_size": 65536 00:19:43.675 }, 00:19:43.675 { 00:19:43.675 "name": "BaseBdev2", 00:19:43.675 "uuid": "38ca3544-bbbf-4ac7-bb7f-66de759b5c3f", 00:19:43.675 "is_configured": true, 00:19:43.675 "data_offset": 0, 00:19:43.675 "data_size": 65536 00:19:43.675 }, 00:19:43.675 { 00:19:43.675 "name": "BaseBdev3", 00:19:43.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.675 "is_configured": false, 00:19:43.675 "data_offset": 0, 00:19:43.675 "data_size": 0 00:19:43.675 }, 00:19:43.675 { 00:19:43.675 "name": "BaseBdev4", 00:19:43.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.675 "is_configured": false, 00:19:43.675 "data_offset": 0, 00:19:43.675 "data_size": 0 00:19:43.675 } 00:19:43.675 ] 00:19:43.675 }' 00:19:43.675 05:02:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:43.675 05:02:13 -- common/autotest_common.sh@10 -- # set +x 00:19:44.610 05:02:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:44.610 [2024-04-27 05:02:14.427794] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:44.610 BaseBdev3 00:19:44.610 05:02:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:44.610 05:02:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:44.610 05:02:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:44.610 
05:02:14 -- common/autotest_common.sh@889 -- # local i 00:19:44.610 05:02:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:44.610 05:02:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:44.610 05:02:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:44.868 05:02:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:45.127 [ 00:19:45.127 { 00:19:45.127 "name": "BaseBdev3", 00:19:45.127 "aliases": [ 00:19:45.127 "c2ed1f12-55e8-4615-99e4-911a2ebd265f" 00:19:45.127 ], 00:19:45.127 "product_name": "Malloc disk", 00:19:45.127 "block_size": 512, 00:19:45.127 "num_blocks": 65536, 00:19:45.127 "uuid": "c2ed1f12-55e8-4615-99e4-911a2ebd265f", 00:19:45.127 "assigned_rate_limits": { 00:19:45.127 "rw_ios_per_sec": 0, 00:19:45.127 "rw_mbytes_per_sec": 0, 00:19:45.127 "r_mbytes_per_sec": 0, 00:19:45.127 "w_mbytes_per_sec": 0 00:19:45.127 }, 00:19:45.127 "claimed": true, 00:19:45.127 "claim_type": "exclusive_write", 00:19:45.127 "zoned": false, 00:19:45.127 "supported_io_types": { 00:19:45.127 "read": true, 00:19:45.127 "write": true, 00:19:45.127 "unmap": true, 00:19:45.127 "write_zeroes": true, 00:19:45.127 "flush": true, 00:19:45.127 "reset": true, 00:19:45.127 "compare": false, 00:19:45.127 "compare_and_write": false, 00:19:45.127 "abort": true, 00:19:45.127 "nvme_admin": false, 00:19:45.127 "nvme_io": false 00:19:45.127 }, 00:19:45.127 "memory_domains": [ 00:19:45.127 { 00:19:45.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:45.127 "dma_device_type": 2 00:19:45.127 } 00:19:45.127 ], 00:19:45.127 "driver_specific": {} 00:19:45.127 } 00:19:45.127 ] 00:19:45.127 05:02:14 -- common/autotest_common.sh@895 -- # return 0 00:19:45.127 05:02:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:45.127 05:02:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:45.127 05:02:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:45.127 05:02:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:45.127 05:02:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:45.127 05:02:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:45.127 05:02:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:45.127 05:02:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:45.127 05:02:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:45.127 05:02:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:45.127 05:02:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:45.127 05:02:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:45.127 05:02:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.127 05:02:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.385 05:02:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:45.385 "name": "Existed_Raid", 00:19:45.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.385 "strip_size_kb": 64, 00:19:45.385 "state": "configuring", 00:19:45.385 "raid_level": "concat", 00:19:45.385 "superblock": false, 00:19:45.385 "num_base_bdevs": 4, 00:19:45.385 "num_base_bdevs_discovered": 3, 00:19:45.385 "num_base_bdevs_operational": 4, 00:19:45.385 "base_bdevs_list": [ 00:19:45.385 { 00:19:45.385 "name": 
"BaseBdev1", 00:19:45.385 "uuid": "b911e835-3697-406e-98d0-294679d3edef", 00:19:45.385 "is_configured": true, 00:19:45.385 "data_offset": 0, 00:19:45.385 "data_size": 65536 00:19:45.385 }, 00:19:45.385 { 00:19:45.385 "name": "BaseBdev2", 00:19:45.385 "uuid": "38ca3544-bbbf-4ac7-bb7f-66de759b5c3f", 00:19:45.385 "is_configured": true, 00:19:45.385 "data_offset": 0, 00:19:45.385 "data_size": 65536 00:19:45.385 }, 00:19:45.385 { 00:19:45.385 "name": "BaseBdev3", 00:19:45.385 "uuid": "c2ed1f12-55e8-4615-99e4-911a2ebd265f", 00:19:45.385 "is_configured": true, 00:19:45.385 "data_offset": 0, 00:19:45.385 "data_size": 65536 00:19:45.385 }, 00:19:45.385 { 00:19:45.385 "name": "BaseBdev4", 00:19:45.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.385 "is_configured": false, 00:19:45.385 "data_offset": 0, 00:19:45.385 "data_size": 0 00:19:45.385 } 00:19:45.385 ] 00:19:45.385 }' 00:19:45.385 05:02:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:45.385 05:02:15 -- common/autotest_common.sh@10 -- # set +x 00:19:46.321 05:02:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:46.321 [2024-04-27 05:02:16.122081] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:46.321 [2024-04-27 05:02:16.122476] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:19:46.321 [2024-04-27 05:02:16.122527] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:46.321 [2024-04-27 05:02:16.122858] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:46.321 [2024-04-27 05:02:16.123445] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:19:46.321 [2024-04-27 05:02:16.123571] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:19:46.321 [2024-04-27 05:02:16.123982] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.321 BaseBdev4 00:19:46.321 05:02:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:46.321 05:02:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:46.321 05:02:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:46.321 05:02:16 -- common/autotest_common.sh@889 -- # local i 00:19:46.321 05:02:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:46.321 05:02:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:46.321 05:02:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:46.578 05:02:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:46.836 [ 00:19:46.836 { 00:19:46.836 "name": "BaseBdev4", 00:19:46.836 "aliases": [ 00:19:46.836 "9ebc034a-db25-4fed-a243-9bb3d60748d7" 00:19:46.836 ], 00:19:46.836 "product_name": "Malloc disk", 00:19:46.836 "block_size": 512, 00:19:46.836 "num_blocks": 65536, 00:19:46.836 "uuid": "9ebc034a-db25-4fed-a243-9bb3d60748d7", 00:19:46.836 "assigned_rate_limits": { 00:19:46.836 "rw_ios_per_sec": 0, 00:19:46.836 "rw_mbytes_per_sec": 0, 00:19:46.836 "r_mbytes_per_sec": 0, 00:19:46.836 "w_mbytes_per_sec": 0 00:19:46.836 }, 00:19:46.836 "claimed": true, 00:19:46.836 "claim_type": "exclusive_write", 00:19:46.836 "zoned": false, 00:19:46.836 
"supported_io_types": { 00:19:46.836 "read": true, 00:19:46.836 "write": true, 00:19:46.836 "unmap": true, 00:19:46.836 "write_zeroes": true, 00:19:46.836 "flush": true, 00:19:46.836 "reset": true, 00:19:46.836 "compare": false, 00:19:46.836 "compare_and_write": false, 00:19:46.836 "abort": true, 00:19:46.836 "nvme_admin": false, 00:19:46.836 "nvme_io": false 00:19:46.836 }, 00:19:46.836 "memory_domains": [ 00:19:46.836 { 00:19:46.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.836 "dma_device_type": 2 00:19:46.836 } 00:19:46.836 ], 00:19:46.836 "driver_specific": {} 00:19:46.836 } 00:19:46.836 ] 00:19:46.836 05:02:16 -- common/autotest_common.sh@895 -- # return 0 00:19:46.836 05:02:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:46.836 05:02:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:46.836 05:02:16 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:46.836 05:02:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:46.836 05:02:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:46.836 05:02:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:46.836 05:02:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:46.836 05:02:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:46.836 05:02:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:46.836 05:02:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:46.836 05:02:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:46.836 05:02:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:46.836 05:02:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.836 05:02:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.094 05:02:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:47.094 "name": "Existed_Raid", 00:19:47.094 "uuid": "49c1e42c-16ef-4ff7-815d-306de83c6bcf", 00:19:47.094 "strip_size_kb": 64, 00:19:47.094 "state": "online", 00:19:47.094 "raid_level": "concat", 00:19:47.094 "superblock": false, 00:19:47.094 "num_base_bdevs": 4, 00:19:47.094 "num_base_bdevs_discovered": 4, 00:19:47.094 "num_base_bdevs_operational": 4, 00:19:47.094 "base_bdevs_list": [ 00:19:47.094 { 00:19:47.094 "name": "BaseBdev1", 00:19:47.094 "uuid": "b911e835-3697-406e-98d0-294679d3edef", 00:19:47.094 "is_configured": true, 00:19:47.094 "data_offset": 0, 00:19:47.094 "data_size": 65536 00:19:47.094 }, 00:19:47.094 { 00:19:47.094 "name": "BaseBdev2", 00:19:47.094 "uuid": "38ca3544-bbbf-4ac7-bb7f-66de759b5c3f", 00:19:47.094 "is_configured": true, 00:19:47.094 "data_offset": 0, 00:19:47.094 "data_size": 65536 00:19:47.094 }, 00:19:47.094 { 00:19:47.094 "name": "BaseBdev3", 00:19:47.094 "uuid": "c2ed1f12-55e8-4615-99e4-911a2ebd265f", 00:19:47.094 "is_configured": true, 00:19:47.094 "data_offset": 0, 00:19:47.094 "data_size": 65536 00:19:47.094 }, 00:19:47.094 { 00:19:47.094 "name": "BaseBdev4", 00:19:47.094 "uuid": "9ebc034a-db25-4fed-a243-9bb3d60748d7", 00:19:47.094 "is_configured": true, 00:19:47.094 "data_offset": 0, 00:19:47.094 "data_size": 65536 00:19:47.094 } 00:19:47.094 ] 00:19:47.095 }' 00:19:47.095 05:02:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:47.095 05:02:16 -- common/autotest_common.sh@10 -- # set +x 00:19:48.029 05:02:17 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:19:48.029 [2024-04-27 05:02:17.846814] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:48.029 [2024-04-27 05:02:17.847159] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:48.029 [2024-04-27 05:02:17.847375] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:48.029 05:02:17 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:48.029 05:02:17 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:19:48.029 05:02:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:48.029 05:02:17 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:48.029 05:02:17 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:48.029 05:02:17 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:48.029 05:02:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:48.029 05:02:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:48.029 05:02:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:48.029 05:02:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:48.029 05:02:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:48.029 05:02:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:48.029 05:02:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:48.029 05:02:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:48.029 05:02:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:48.029 05:02:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.029 05:02:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.288 05:02:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:48.288 "name": "Existed_Raid", 00:19:48.288 "uuid": "49c1e42c-16ef-4ff7-815d-306de83c6bcf", 00:19:48.288 "strip_size_kb": 64, 00:19:48.288 "state": "offline", 00:19:48.288 "raid_level": "concat", 00:19:48.288 "superblock": false, 00:19:48.288 "num_base_bdevs": 4, 00:19:48.288 "num_base_bdevs_discovered": 3, 00:19:48.288 "num_base_bdevs_operational": 3, 00:19:48.288 "base_bdevs_list": [ 00:19:48.288 { 00:19:48.288 "name": null, 00:19:48.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.288 "is_configured": false, 00:19:48.288 "data_offset": 0, 00:19:48.288 "data_size": 65536 00:19:48.288 }, 00:19:48.288 { 00:19:48.288 "name": "BaseBdev2", 00:19:48.288 "uuid": "38ca3544-bbbf-4ac7-bb7f-66de759b5c3f", 00:19:48.288 "is_configured": true, 00:19:48.288 "data_offset": 0, 00:19:48.288 "data_size": 65536 00:19:48.288 }, 00:19:48.288 { 00:19:48.288 "name": "BaseBdev3", 00:19:48.288 "uuid": "c2ed1f12-55e8-4615-99e4-911a2ebd265f", 00:19:48.288 "is_configured": true, 00:19:48.288 "data_offset": 0, 00:19:48.288 "data_size": 65536 00:19:48.288 }, 00:19:48.288 { 00:19:48.288 "name": "BaseBdev4", 00:19:48.288 "uuid": "9ebc034a-db25-4fed-a243-9bb3d60748d7", 00:19:48.288 "is_configured": true, 00:19:48.288 "data_offset": 0, 00:19:48.288 "data_size": 65536 00:19:48.288 } 00:19:48.288 ] 00:19:48.288 }' 00:19:48.288 05:02:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:48.288 05:02:18 -- common/autotest_common.sh@10 -- # set +x 00:19:49.303 05:02:18 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:49.303 05:02:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:49.303 05:02:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:19:49.303 05:02:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:49.303 05:02:19 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:49.303 05:02:19 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:49.303 05:02:19 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:49.592 [2024-04-27 05:02:19.383489] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:49.592 05:02:19 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:49.592 05:02:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:49.592 05:02:19 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.592 05:02:19 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:49.849 05:02:19 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:49.849 05:02:19 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:49.849 05:02:19 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:50.107 [2024-04-27 05:02:19.981240] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:50.365 05:02:20 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:50.365 05:02:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:50.365 05:02:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.365 05:02:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:50.623 05:02:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:50.623 05:02:20 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:50.623 05:02:20 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:50.623 [2024-04-27 05:02:20.513509] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:50.623 [2024-04-27 05:02:20.513914] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:19:50.881 05:02:20 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:50.881 05:02:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:50.881 05:02:20 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.881 05:02:20 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:51.140 05:02:20 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:51.140 05:02:20 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:51.140 05:02:20 -- bdev/bdev_raid.sh@287 -- # killprocess 131923 00:19:51.140 05:02:20 -- common/autotest_common.sh@926 -- # '[' -z 131923 ']' 00:19:51.140 05:02:20 -- common/autotest_common.sh@930 -- # kill -0 131923 00:19:51.140 05:02:20 -- common/autotest_common.sh@931 -- # uname 00:19:51.140 05:02:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:51.140 05:02:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131923 00:19:51.140 05:02:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:51.140 05:02:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:51.140 05:02:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131923' 00:19:51.140 killing process with pid 131923 00:19:51.140 05:02:20 -- common/autotest_common.sh@945 
-- # kill 131923 00:19:51.140 05:02:20 -- common/autotest_common.sh@950 -- # wait 131923 00:19:51.140 [2024-04-27 05:02:20.812771] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:51.140 [2024-04-27 05:02:20.813127] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:51.399 00:19:51.399 real 0m14.453s 00:19:51.399 user 0m26.504s 00:19:51.399 sys 0m1.963s 00:19:51.399 05:02:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:51.399 05:02:21 -- common/autotest_common.sh@10 -- # set +x 00:19:51.399 ************************************ 00:19:51.399 END TEST raid_state_function_test 00:19:51.399 ************************************ 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:19:51.399 05:02:21 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:51.399 05:02:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:51.399 05:02:21 -- common/autotest_common.sh@10 -- # set +x 00:19:51.399 ************************************ 00:19:51.399 START TEST raid_state_function_test_sb 00:19:51.399 ************************************ 00:19:51.399 05:02:21 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@226 -- # raid_pid=132361 
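Note: raid_state_function_test_sb repeats the same state-machine checks with superblock_create_arg=-s, so every base bdev reserves room for an on-disk superblock; that is why the JSON further down reports data_offset 2048 and data_size 63488 instead of the 0 and 65536 seen in the previous test. Sketch of the create call as the harness issues it below (same rpc.py path and socket as above):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # -s enables superblocks, -z 64 sets the 64 KiB strip size, -r concat the raid level
  $RPC bdev_raid_create -z 64 -s -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid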
00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:51.399 Process raid pid: 132361 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 132361' 00:19:51.399 05:02:21 -- bdev/bdev_raid.sh@228 -- # waitforlisten 132361 /var/tmp/spdk-raid.sock 00:19:51.399 05:02:21 -- common/autotest_common.sh@819 -- # '[' -z 132361 ']' 00:19:51.399 05:02:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:51.399 05:02:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:51.399 05:02:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:51.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:51.399 05:02:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:51.399 05:02:21 -- common/autotest_common.sh@10 -- # set +x 00:19:51.399 [2024-04-27 05:02:21.204413] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:51.399 [2024-04-27 05:02:21.204954] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.658 [2024-04-27 05:02:21.374461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.658 [2024-04-27 05:02:21.510667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.916 [2024-04-27 05:02:21.602830] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:52.483 05:02:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:52.483 05:02:22 -- common/autotest_common.sh@852 -- # return 0 00:19:52.483 05:02:22 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:52.742 [2024-04-27 05:02:22.409528] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:52.742 [2024-04-27 05:02:22.409848] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:52.742 [2024-04-27 05:02:22.409975] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:52.742 [2024-04-27 05:02:22.410050] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:52.742 [2024-04-27 05:02:22.410094] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:52.742 [2024-04-27 05:02:22.410178] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:52.742 [2024-04-27 05:02:22.410218] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:52.742 [2024-04-27 05:02:22.410270] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:52.742 05:02:22 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:52.742 05:02:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:52.742 05:02:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:52.742 05:02:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:52.742 05:02:22 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:52.742 05:02:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:52.742 05:02:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:52.742 05:02:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:52.742 05:02:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:52.742 05:02:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:52.742 05:02:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.742 05:02:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.001 05:02:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:53.001 "name": "Existed_Raid", 00:19:53.001 "uuid": "355c9d16-511a-4c82-9672-4e68c4f0f3d4", 00:19:53.001 "strip_size_kb": 64, 00:19:53.001 "state": "configuring", 00:19:53.001 "raid_level": "concat", 00:19:53.001 "superblock": true, 00:19:53.001 "num_base_bdevs": 4, 00:19:53.001 "num_base_bdevs_discovered": 0, 00:19:53.001 "num_base_bdevs_operational": 4, 00:19:53.001 "base_bdevs_list": [ 00:19:53.001 { 00:19:53.001 "name": "BaseBdev1", 00:19:53.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.001 "is_configured": false, 00:19:53.001 "data_offset": 0, 00:19:53.001 "data_size": 0 00:19:53.001 }, 00:19:53.001 { 00:19:53.001 "name": "BaseBdev2", 00:19:53.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.001 "is_configured": false, 00:19:53.001 "data_offset": 0, 00:19:53.001 "data_size": 0 00:19:53.001 }, 00:19:53.001 { 00:19:53.001 "name": "BaseBdev3", 00:19:53.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.001 "is_configured": false, 00:19:53.001 "data_offset": 0, 00:19:53.001 "data_size": 0 00:19:53.001 }, 00:19:53.001 { 00:19:53.001 "name": "BaseBdev4", 00:19:53.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.001 "is_configured": false, 00:19:53.001 "data_offset": 0, 00:19:53.001 "data_size": 0 00:19:53.001 } 00:19:53.001 ] 00:19:53.001 }' 00:19:53.001 05:02:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:53.001 05:02:22 -- common/autotest_common.sh@10 -- # set +x 00:19:53.564 05:02:23 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:53.822 [2024-04-27 05:02:23.617644] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:53.822 [2024-04-27 05:02:23.618040] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:19:53.822 05:02:23 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:54.080 [2024-04-27 05:02:23.837791] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:54.080 [2024-04-27 05:02:23.838199] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:54.080 [2024-04-27 05:02:23.838359] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:54.080 [2024-04-27 05:02:23.838440] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:54.080 [2024-04-27 05:02:23.838735] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:54.080 [2024-04-27 05:02:23.838827] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:54.080 [2024-04-27 05:02:23.838938] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:54.080 [2024-04-27 05:02:23.839007] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:54.080 05:02:23 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:54.337 [2024-04-27 05:02:24.125827] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:54.337 BaseBdev1 00:19:54.337 05:02:24 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:54.337 05:02:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:54.337 05:02:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:54.337 05:02:24 -- common/autotest_common.sh@889 -- # local i 00:19:54.337 05:02:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:54.337 05:02:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:54.337 05:02:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:54.595 05:02:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:54.852 [ 00:19:54.852 { 00:19:54.852 "name": "BaseBdev1", 00:19:54.852 "aliases": [ 00:19:54.852 "3a9e3360-fce2-4b5c-9003-5efffc06c68c" 00:19:54.852 ], 00:19:54.852 "product_name": "Malloc disk", 00:19:54.852 "block_size": 512, 00:19:54.852 "num_blocks": 65536, 00:19:54.852 "uuid": "3a9e3360-fce2-4b5c-9003-5efffc06c68c", 00:19:54.852 "assigned_rate_limits": { 00:19:54.852 "rw_ios_per_sec": 0, 00:19:54.852 "rw_mbytes_per_sec": 0, 00:19:54.852 "r_mbytes_per_sec": 0, 00:19:54.852 "w_mbytes_per_sec": 0 00:19:54.852 }, 00:19:54.852 "claimed": true, 00:19:54.852 "claim_type": "exclusive_write", 00:19:54.852 "zoned": false, 00:19:54.852 "supported_io_types": { 00:19:54.852 "read": true, 00:19:54.852 "write": true, 00:19:54.852 "unmap": true, 00:19:54.852 "write_zeroes": true, 00:19:54.852 "flush": true, 00:19:54.852 "reset": true, 00:19:54.852 "compare": false, 00:19:54.852 "compare_and_write": false, 00:19:54.852 "abort": true, 00:19:54.852 "nvme_admin": false, 00:19:54.852 "nvme_io": false 00:19:54.852 }, 00:19:54.852 "memory_domains": [ 00:19:54.852 { 00:19:54.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.852 "dma_device_type": 2 00:19:54.852 } 00:19:54.852 ], 00:19:54.852 "driver_specific": {} 00:19:54.852 } 00:19:54.852 ] 00:19:54.852 05:02:24 -- common/autotest_common.sh@895 -- # return 0 00:19:54.852 05:02:24 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:54.852 05:02:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:54.852 05:02:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:54.852 05:02:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:54.852 05:02:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:54.852 05:02:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:54.852 05:02:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:54.852 05:02:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:54.852 05:02:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:54.852 05:02:24 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:19:54.852 05:02:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.852 05:02:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.138 05:02:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:55.138 "name": "Existed_Raid", 00:19:55.138 "uuid": "c4926379-320a-42c3-97eb-f62bcafc54bb", 00:19:55.138 "strip_size_kb": 64, 00:19:55.138 "state": "configuring", 00:19:55.138 "raid_level": "concat", 00:19:55.138 "superblock": true, 00:19:55.138 "num_base_bdevs": 4, 00:19:55.138 "num_base_bdevs_discovered": 1, 00:19:55.138 "num_base_bdevs_operational": 4, 00:19:55.138 "base_bdevs_list": [ 00:19:55.138 { 00:19:55.138 "name": "BaseBdev1", 00:19:55.138 "uuid": "3a9e3360-fce2-4b5c-9003-5efffc06c68c", 00:19:55.138 "is_configured": true, 00:19:55.138 "data_offset": 2048, 00:19:55.138 "data_size": 63488 00:19:55.138 }, 00:19:55.138 { 00:19:55.138 "name": "BaseBdev2", 00:19:55.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.138 "is_configured": false, 00:19:55.138 "data_offset": 0, 00:19:55.138 "data_size": 0 00:19:55.138 }, 00:19:55.138 { 00:19:55.138 "name": "BaseBdev3", 00:19:55.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.138 "is_configured": false, 00:19:55.138 "data_offset": 0, 00:19:55.138 "data_size": 0 00:19:55.138 }, 00:19:55.138 { 00:19:55.138 "name": "BaseBdev4", 00:19:55.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.138 "is_configured": false, 00:19:55.138 "data_offset": 0, 00:19:55.138 "data_size": 0 00:19:55.138 } 00:19:55.138 ] 00:19:55.138 }' 00:19:55.138 05:02:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:55.138 05:02:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.703 05:02:25 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:55.960 [2024-04-27 05:02:25.818329] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:55.960 [2024-04-27 05:02:25.818708] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:55.960 05:02:25 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:55.960 05:02:25 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:56.217 05:02:26 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:56.475 BaseBdev1 00:19:56.475 05:02:26 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:56.475 05:02:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:56.475 05:02:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:56.475 05:02:26 -- common/autotest_common.sh@889 -- # local i 00:19:56.475 05:02:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:56.475 05:02:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:56.475 05:02:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:56.733 05:02:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:56.991 [ 00:19:56.991 { 00:19:56.991 "name": "BaseBdev1", 00:19:56.991 "aliases": [ 00:19:56.991 "dca6fec6-68e9-465a-a168-ce41f7d8d689" 00:19:56.991 ], 
00:19:56.991 "product_name": "Malloc disk", 00:19:56.991 "block_size": 512, 00:19:56.991 "num_blocks": 65536, 00:19:56.991 "uuid": "dca6fec6-68e9-465a-a168-ce41f7d8d689", 00:19:56.991 "assigned_rate_limits": { 00:19:56.991 "rw_ios_per_sec": 0, 00:19:56.991 "rw_mbytes_per_sec": 0, 00:19:56.991 "r_mbytes_per_sec": 0, 00:19:56.991 "w_mbytes_per_sec": 0 00:19:56.991 }, 00:19:56.991 "claimed": false, 00:19:56.991 "zoned": false, 00:19:56.991 "supported_io_types": { 00:19:56.991 "read": true, 00:19:56.991 "write": true, 00:19:56.991 "unmap": true, 00:19:56.991 "write_zeroes": true, 00:19:56.991 "flush": true, 00:19:56.991 "reset": true, 00:19:56.991 "compare": false, 00:19:56.991 "compare_and_write": false, 00:19:56.991 "abort": true, 00:19:56.991 "nvme_admin": false, 00:19:56.991 "nvme_io": false 00:19:56.991 }, 00:19:56.991 "memory_domains": [ 00:19:56.991 { 00:19:56.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.991 "dma_device_type": 2 00:19:56.991 } 00:19:56.991 ], 00:19:56.991 "driver_specific": {} 00:19:56.991 } 00:19:56.991 ] 00:19:56.991 05:02:26 -- common/autotest_common.sh@895 -- # return 0 00:19:56.991 05:02:26 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:57.249 [2024-04-27 05:02:27.081655] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:57.249 [2024-04-27 05:02:27.084954] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:57.249 [2024-04-27 05:02:27.085304] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:57.249 [2024-04-27 05:02:27.085436] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:57.249 [2024-04-27 05:02:27.085512] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:57.249 [2024-04-27 05:02:27.085696] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:57.249 [2024-04-27 05:02:27.085767] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:57.249 05:02:27 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:57.249 05:02:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:57.249 05:02:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:57.249 05:02:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:57.249 05:02:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:57.249 05:02:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:57.249 05:02:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:57.249 05:02:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:57.249 05:02:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:57.249 05:02:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:57.249 05:02:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:57.249 05:02:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:57.249 05:02:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.249 05:02:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:57.508 05:02:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:57.508 "name": "Existed_Raid", 
00:19:57.508 "uuid": "9857dfce-4336-428b-a58a-69da1a2942fa", 00:19:57.508 "strip_size_kb": 64, 00:19:57.508 "state": "configuring", 00:19:57.508 "raid_level": "concat", 00:19:57.508 "superblock": true, 00:19:57.508 "num_base_bdevs": 4, 00:19:57.508 "num_base_bdevs_discovered": 1, 00:19:57.508 "num_base_bdevs_operational": 4, 00:19:57.508 "base_bdevs_list": [ 00:19:57.508 { 00:19:57.508 "name": "BaseBdev1", 00:19:57.508 "uuid": "dca6fec6-68e9-465a-a168-ce41f7d8d689", 00:19:57.508 "is_configured": true, 00:19:57.508 "data_offset": 2048, 00:19:57.508 "data_size": 63488 00:19:57.508 }, 00:19:57.508 { 00:19:57.508 "name": "BaseBdev2", 00:19:57.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.508 "is_configured": false, 00:19:57.508 "data_offset": 0, 00:19:57.508 "data_size": 0 00:19:57.508 }, 00:19:57.508 { 00:19:57.508 "name": "BaseBdev3", 00:19:57.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.508 "is_configured": false, 00:19:57.508 "data_offset": 0, 00:19:57.508 "data_size": 0 00:19:57.508 }, 00:19:57.508 { 00:19:57.508 "name": "BaseBdev4", 00:19:57.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.508 "is_configured": false, 00:19:57.508 "data_offset": 0, 00:19:57.508 "data_size": 0 00:19:57.508 } 00:19:57.508 ] 00:19:57.508 }' 00:19:57.508 05:02:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:57.508 05:02:27 -- common/autotest_common.sh@10 -- # set +x 00:19:58.102 05:02:27 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:58.360 [2024-04-27 05:02:28.186152] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:58.360 BaseBdev2 00:19:58.360 05:02:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:58.360 05:02:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:58.360 05:02:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:58.360 05:02:28 -- common/autotest_common.sh@889 -- # local i 00:19:58.361 05:02:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:58.361 05:02:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:58.361 05:02:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:58.619 05:02:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:58.878 [ 00:19:58.878 { 00:19:58.878 "name": "BaseBdev2", 00:19:58.878 "aliases": [ 00:19:58.878 "ab78c40d-3118-4092-93db-e5ec50010c2b" 00:19:58.878 ], 00:19:58.878 "product_name": "Malloc disk", 00:19:58.878 "block_size": 512, 00:19:58.878 "num_blocks": 65536, 00:19:58.878 "uuid": "ab78c40d-3118-4092-93db-e5ec50010c2b", 00:19:58.878 "assigned_rate_limits": { 00:19:58.878 "rw_ios_per_sec": 0, 00:19:58.878 "rw_mbytes_per_sec": 0, 00:19:58.878 "r_mbytes_per_sec": 0, 00:19:58.878 "w_mbytes_per_sec": 0 00:19:58.878 }, 00:19:58.878 "claimed": true, 00:19:58.878 "claim_type": "exclusive_write", 00:19:58.878 "zoned": false, 00:19:58.878 "supported_io_types": { 00:19:58.878 "read": true, 00:19:58.878 "write": true, 00:19:58.878 "unmap": true, 00:19:58.878 "write_zeroes": true, 00:19:58.878 "flush": true, 00:19:58.878 "reset": true, 00:19:58.878 "compare": false, 00:19:58.878 "compare_and_write": false, 00:19:58.878 "abort": true, 00:19:58.878 "nvme_admin": false, 00:19:58.878 "nvme_io": false 00:19:58.878 }, 00:19:58.878 
"memory_domains": [ 00:19:58.878 { 00:19:58.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.878 "dma_device_type": 2 00:19:58.878 } 00:19:58.878 ], 00:19:58.878 "driver_specific": {} 00:19:58.878 } 00:19:58.878 ] 00:19:58.878 05:02:28 -- common/autotest_common.sh@895 -- # return 0 00:19:58.878 05:02:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:58.878 05:02:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:58.878 05:02:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:58.878 05:02:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:58.878 05:02:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:58.878 05:02:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:58.878 05:02:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:58.878 05:02:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:58.878 05:02:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:58.878 05:02:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:58.878 05:02:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:58.878 05:02:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:58.878 05:02:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.878 05:02:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.137 05:02:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:59.137 "name": "Existed_Raid", 00:19:59.137 "uuid": "9857dfce-4336-428b-a58a-69da1a2942fa", 00:19:59.137 "strip_size_kb": 64, 00:19:59.137 "state": "configuring", 00:19:59.137 "raid_level": "concat", 00:19:59.137 "superblock": true, 00:19:59.137 "num_base_bdevs": 4, 00:19:59.137 "num_base_bdevs_discovered": 2, 00:19:59.137 "num_base_bdevs_operational": 4, 00:19:59.137 "base_bdevs_list": [ 00:19:59.137 { 00:19:59.137 "name": "BaseBdev1", 00:19:59.137 "uuid": "dca6fec6-68e9-465a-a168-ce41f7d8d689", 00:19:59.137 "is_configured": true, 00:19:59.137 "data_offset": 2048, 00:19:59.137 "data_size": 63488 00:19:59.137 }, 00:19:59.137 { 00:19:59.137 "name": "BaseBdev2", 00:19:59.137 "uuid": "ab78c40d-3118-4092-93db-e5ec50010c2b", 00:19:59.137 "is_configured": true, 00:19:59.137 "data_offset": 2048, 00:19:59.137 "data_size": 63488 00:19:59.137 }, 00:19:59.137 { 00:19:59.137 "name": "BaseBdev3", 00:19:59.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.137 "is_configured": false, 00:19:59.137 "data_offset": 0, 00:19:59.137 "data_size": 0 00:19:59.137 }, 00:19:59.137 { 00:19:59.137 "name": "BaseBdev4", 00:19:59.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.137 "is_configured": false, 00:19:59.137 "data_offset": 0, 00:19:59.137 "data_size": 0 00:19:59.137 } 00:19:59.137 ] 00:19:59.137 }' 00:19:59.137 05:02:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:59.137 05:02:28 -- common/autotest_common.sh@10 -- # set +x 00:19:59.704 05:02:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:59.963 [2024-04-27 05:02:29.783973] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:59.963 BaseBdev3 00:19:59.963 05:02:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:59.963 05:02:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:59.963 05:02:29 -- common/autotest_common.sh@888 -- # local 
bdev_timeout= 00:19:59.963 05:02:29 -- common/autotest_common.sh@889 -- # local i 00:19:59.963 05:02:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:59.963 05:02:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:59.963 05:02:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:00.221 05:02:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:00.480 [ 00:20:00.480 { 00:20:00.480 "name": "BaseBdev3", 00:20:00.480 "aliases": [ 00:20:00.480 "e7903375-59f1-4524-bae7-b010f6f274cb" 00:20:00.480 ], 00:20:00.480 "product_name": "Malloc disk", 00:20:00.480 "block_size": 512, 00:20:00.480 "num_blocks": 65536, 00:20:00.480 "uuid": "e7903375-59f1-4524-bae7-b010f6f274cb", 00:20:00.480 "assigned_rate_limits": { 00:20:00.480 "rw_ios_per_sec": 0, 00:20:00.480 "rw_mbytes_per_sec": 0, 00:20:00.480 "r_mbytes_per_sec": 0, 00:20:00.480 "w_mbytes_per_sec": 0 00:20:00.480 }, 00:20:00.480 "claimed": true, 00:20:00.480 "claim_type": "exclusive_write", 00:20:00.480 "zoned": false, 00:20:00.480 "supported_io_types": { 00:20:00.480 "read": true, 00:20:00.480 "write": true, 00:20:00.480 "unmap": true, 00:20:00.480 "write_zeroes": true, 00:20:00.480 "flush": true, 00:20:00.480 "reset": true, 00:20:00.480 "compare": false, 00:20:00.480 "compare_and_write": false, 00:20:00.480 "abort": true, 00:20:00.480 "nvme_admin": false, 00:20:00.480 "nvme_io": false 00:20:00.480 }, 00:20:00.480 "memory_domains": [ 00:20:00.480 { 00:20:00.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.480 "dma_device_type": 2 00:20:00.480 } 00:20:00.480 ], 00:20:00.480 "driver_specific": {} 00:20:00.480 } 00:20:00.480 ] 00:20:00.480 05:02:30 -- common/autotest_common.sh@895 -- # return 0 00:20:00.480 05:02:30 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:00.480 05:02:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:00.480 05:02:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:00.480 05:02:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:00.480 05:02:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:00.480 05:02:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:00.480 05:02:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:00.480 05:02:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:00.480 05:02:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:00.480 05:02:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:00.480 05:02:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:00.480 05:02:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:00.480 05:02:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.480 05:02:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.738 05:02:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:00.738 "name": "Existed_Raid", 00:20:00.738 "uuid": "9857dfce-4336-428b-a58a-69da1a2942fa", 00:20:00.738 "strip_size_kb": 64, 00:20:00.738 "state": "configuring", 00:20:00.738 "raid_level": "concat", 00:20:00.738 "superblock": true, 00:20:00.738 "num_base_bdevs": 4, 00:20:00.738 "num_base_bdevs_discovered": 3, 00:20:00.738 "num_base_bdevs_operational": 4, 00:20:00.738 "base_bdevs_list": [ 00:20:00.738 { 
00:20:00.738 "name": "BaseBdev1", 00:20:00.738 "uuid": "dca6fec6-68e9-465a-a168-ce41f7d8d689", 00:20:00.738 "is_configured": true, 00:20:00.738 "data_offset": 2048, 00:20:00.738 "data_size": 63488 00:20:00.739 }, 00:20:00.739 { 00:20:00.739 "name": "BaseBdev2", 00:20:00.739 "uuid": "ab78c40d-3118-4092-93db-e5ec50010c2b", 00:20:00.739 "is_configured": true, 00:20:00.739 "data_offset": 2048, 00:20:00.739 "data_size": 63488 00:20:00.739 }, 00:20:00.739 { 00:20:00.739 "name": "BaseBdev3", 00:20:00.739 "uuid": "e7903375-59f1-4524-bae7-b010f6f274cb", 00:20:00.739 "is_configured": true, 00:20:00.739 "data_offset": 2048, 00:20:00.739 "data_size": 63488 00:20:00.739 }, 00:20:00.739 { 00:20:00.739 "name": "BaseBdev4", 00:20:00.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.739 "is_configured": false, 00:20:00.739 "data_offset": 0, 00:20:00.739 "data_size": 0 00:20:00.739 } 00:20:00.739 ] 00:20:00.739 }' 00:20:00.739 05:02:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:00.739 05:02:30 -- common/autotest_common.sh@10 -- # set +x 00:20:01.675 05:02:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:01.675 [2024-04-27 05:02:31.526653] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:01.675 [2024-04-27 05:02:31.527289] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:20:01.675 [2024-04-27 05:02:31.527428] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:01.675 [2024-04-27 05:02:31.527650] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:01.675 [2024-04-27 05:02:31.528121] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:20:01.675 [2024-04-27 05:02:31.528250] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:20:01.675 [2024-04-27 05:02:31.528585] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.675 BaseBdev4 00:20:01.675 05:02:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:20:01.675 05:02:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:20:01.675 05:02:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:01.675 05:02:31 -- common/autotest_common.sh@889 -- # local i 00:20:01.675 05:02:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:01.675 05:02:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:01.675 05:02:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:01.933 05:02:31 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:02.192 [ 00:20:02.192 { 00:20:02.192 "name": "BaseBdev4", 00:20:02.192 "aliases": [ 00:20:02.192 "627df0d9-3c43-4642-8ef3-af5f907d77b1" 00:20:02.192 ], 00:20:02.192 "product_name": "Malloc disk", 00:20:02.192 "block_size": 512, 00:20:02.192 "num_blocks": 65536, 00:20:02.192 "uuid": "627df0d9-3c43-4642-8ef3-af5f907d77b1", 00:20:02.192 "assigned_rate_limits": { 00:20:02.192 "rw_ios_per_sec": 0, 00:20:02.192 "rw_mbytes_per_sec": 0, 00:20:02.192 "r_mbytes_per_sec": 0, 00:20:02.192 "w_mbytes_per_sec": 0 00:20:02.192 }, 00:20:02.192 "claimed": true, 00:20:02.192 "claim_type": "exclusive_write", 00:20:02.192 "zoned": false, 
00:20:02.192 "supported_io_types": { 00:20:02.192 "read": true, 00:20:02.192 "write": true, 00:20:02.192 "unmap": true, 00:20:02.192 "write_zeroes": true, 00:20:02.192 "flush": true, 00:20:02.192 "reset": true, 00:20:02.192 "compare": false, 00:20:02.192 "compare_and_write": false, 00:20:02.192 "abort": true, 00:20:02.192 "nvme_admin": false, 00:20:02.192 "nvme_io": false 00:20:02.192 }, 00:20:02.192 "memory_domains": [ 00:20:02.192 { 00:20:02.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.192 "dma_device_type": 2 00:20:02.192 } 00:20:02.192 ], 00:20:02.192 "driver_specific": {} 00:20:02.192 } 00:20:02.192 ] 00:20:02.192 05:02:32 -- common/autotest_common.sh@895 -- # return 0 00:20:02.192 05:02:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:02.192 05:02:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:02.192 05:02:32 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:20:02.192 05:02:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:02.192 05:02:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:02.192 05:02:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:02.192 05:02:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:02.192 05:02:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:02.192 05:02:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:02.192 05:02:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:02.192 05:02:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:02.192 05:02:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:02.192 05:02:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.192 05:02:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.450 05:02:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:02.450 "name": "Existed_Raid", 00:20:02.450 "uuid": "9857dfce-4336-428b-a58a-69da1a2942fa", 00:20:02.451 "strip_size_kb": 64, 00:20:02.451 "state": "online", 00:20:02.451 "raid_level": "concat", 00:20:02.451 "superblock": true, 00:20:02.451 "num_base_bdevs": 4, 00:20:02.451 "num_base_bdevs_discovered": 4, 00:20:02.451 "num_base_bdevs_operational": 4, 00:20:02.451 "base_bdevs_list": [ 00:20:02.451 { 00:20:02.451 "name": "BaseBdev1", 00:20:02.451 "uuid": "dca6fec6-68e9-465a-a168-ce41f7d8d689", 00:20:02.451 "is_configured": true, 00:20:02.451 "data_offset": 2048, 00:20:02.451 "data_size": 63488 00:20:02.451 }, 00:20:02.451 { 00:20:02.451 "name": "BaseBdev2", 00:20:02.451 "uuid": "ab78c40d-3118-4092-93db-e5ec50010c2b", 00:20:02.451 "is_configured": true, 00:20:02.451 "data_offset": 2048, 00:20:02.451 "data_size": 63488 00:20:02.451 }, 00:20:02.451 { 00:20:02.451 "name": "BaseBdev3", 00:20:02.451 "uuid": "e7903375-59f1-4524-bae7-b010f6f274cb", 00:20:02.451 "is_configured": true, 00:20:02.451 "data_offset": 2048, 00:20:02.451 "data_size": 63488 00:20:02.451 }, 00:20:02.451 { 00:20:02.451 "name": "BaseBdev4", 00:20:02.451 "uuid": "627df0d9-3c43-4642-8ef3-af5f907d77b1", 00:20:02.451 "is_configured": true, 00:20:02.451 "data_offset": 2048, 00:20:02.451 "data_size": 63488 00:20:02.451 } 00:20:02.451 ] 00:20:02.451 }' 00:20:02.451 05:02:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:02.451 05:02:32 -- common/autotest_common.sh@10 -- # set +x 00:20:03.384 05:02:32 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:20:03.384 [2024-04-27 05:02:33.179270] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:03.384 [2024-04-27 05:02:33.179620] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:03.384 [2024-04-27 05:02:33.179846] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.384 05:02:33 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:03.384 05:02:33 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:20:03.384 05:02:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:03.384 05:02:33 -- bdev/bdev_raid.sh@197 -- # return 1 00:20:03.384 05:02:33 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:20:03.384 05:02:33 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:20:03.384 05:02:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:03.384 05:02:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:20:03.384 05:02:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:03.384 05:02:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:03.384 05:02:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:03.384 05:02:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:03.384 05:02:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:03.384 05:02:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:03.384 05:02:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:03.384 05:02:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.384 05:02:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.642 05:02:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:03.642 "name": "Existed_Raid", 00:20:03.642 "uuid": "9857dfce-4336-428b-a58a-69da1a2942fa", 00:20:03.642 "strip_size_kb": 64, 00:20:03.642 "state": "offline", 00:20:03.642 "raid_level": "concat", 00:20:03.642 "superblock": true, 00:20:03.642 "num_base_bdevs": 4, 00:20:03.642 "num_base_bdevs_discovered": 3, 00:20:03.642 "num_base_bdevs_operational": 3, 00:20:03.642 "base_bdevs_list": [ 00:20:03.642 { 00:20:03.642 "name": null, 00:20:03.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.642 "is_configured": false, 00:20:03.642 "data_offset": 2048, 00:20:03.642 "data_size": 63488 00:20:03.642 }, 00:20:03.642 { 00:20:03.642 "name": "BaseBdev2", 00:20:03.642 "uuid": "ab78c40d-3118-4092-93db-e5ec50010c2b", 00:20:03.642 "is_configured": true, 00:20:03.642 "data_offset": 2048, 00:20:03.642 "data_size": 63488 00:20:03.642 }, 00:20:03.642 { 00:20:03.642 "name": "BaseBdev3", 00:20:03.642 "uuid": "e7903375-59f1-4524-bae7-b010f6f274cb", 00:20:03.642 "is_configured": true, 00:20:03.642 "data_offset": 2048, 00:20:03.642 "data_size": 63488 00:20:03.642 }, 00:20:03.642 { 00:20:03.642 "name": "BaseBdev4", 00:20:03.642 "uuid": "627df0d9-3c43-4642-8ef3-af5f907d77b1", 00:20:03.642 "is_configured": true, 00:20:03.642 "data_offset": 2048, 00:20:03.642 "data_size": 63488 00:20:03.642 } 00:20:03.642 ] 00:20:03.642 }' 00:20:03.642 05:02:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:03.642 05:02:33 -- common/autotest_common.sh@10 -- # set +x 00:20:04.574 05:02:34 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:04.574 05:02:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:04.574 05:02:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.574 05:02:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:04.574 05:02:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:04.574 05:02:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:04.574 05:02:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:04.832 [2024-04-27 05:02:34.650483] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:04.832 05:02:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:04.832 05:02:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:04.832 05:02:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.832 05:02:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:05.099 05:02:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:05.099 05:02:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:05.099 05:02:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:05.371 [2024-04-27 05:02:35.189001] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:05.371 05:02:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:05.371 05:02:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:05.371 05:02:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:05.371 05:02:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.629 05:02:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:05.629 05:02:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:05.629 05:02:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:05.887 [2024-04-27 05:02:35.753130] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:05.887 [2024-04-27 05:02:35.753549] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:20:06.145 05:02:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:06.145 05:02:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:06.145 05:02:35 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.145 05:02:35 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:06.145 05:02:36 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:06.145 05:02:36 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:06.145 05:02:36 -- bdev/bdev_raid.sh@287 -- # killprocess 132361 00:20:06.145 05:02:36 -- common/autotest_common.sh@926 -- # '[' -z 132361 ']' 00:20:06.145 05:02:36 -- common/autotest_common.sh@930 -- # kill -0 132361 00:20:06.145 05:02:36 -- common/autotest_common.sh@931 -- # uname 00:20:06.145 05:02:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:06.145 05:02:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132361 00:20:06.403 05:02:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:06.403 05:02:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:06.403 05:02:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132361' 00:20:06.403 killing process with pid 132361 
00:20:06.403 05:02:36 -- common/autotest_common.sh@945 -- # kill 132361 00:20:06.403 05:02:36 -- common/autotest_common.sh@950 -- # wait 132361 00:20:06.403 [2024-04-27 05:02:36.061399] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:06.403 [2024-04-27 05:02:36.061521] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:06.662 05:02:36 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:06.662 00:20:06.662 real 0m15.299s 00:20:06.662 user 0m27.976s 00:20:06.662 sys 0m2.045s 00:20:06.662 05:02:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:06.662 05:02:36 -- common/autotest_common.sh@10 -- # set +x 00:20:06.662 ************************************ 00:20:06.662 END TEST raid_state_function_test_sb 00:20:06.662 ************************************ 00:20:06.662 05:02:36 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:20:06.662 05:02:36 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:20:06.662 05:02:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:06.662 05:02:36 -- common/autotest_common.sh@10 -- # set +x 00:20:06.662 ************************************ 00:20:06.662 START TEST raid_superblock_test 00:20:06.662 ************************************ 00:20:06.662 05:02:36 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:20:06.662 05:02:36 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:20:06.662 05:02:36 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:20:06.662 05:02:36 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:20:06.662 05:02:36 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:20:06.662 05:02:36 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:20:06.662 05:02:36 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:20:06.662 05:02:36 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:20:06.662 05:02:36 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:20:06.662 05:02:36 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:20:06.662 05:02:36 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:20:06.663 05:02:36 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:20:06.663 05:02:36 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:20:06.663 05:02:36 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:20:06.663 05:02:36 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:20:06.663 05:02:36 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:20:06.663 05:02:36 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:20:06.663 05:02:36 -- bdev/bdev_raid.sh@357 -- # raid_pid=132816 00:20:06.663 05:02:36 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:06.663 05:02:36 -- bdev/bdev_raid.sh@358 -- # waitforlisten 132816 /var/tmp/spdk-raid.sock 00:20:06.663 05:02:36 -- common/autotest_common.sh@819 -- # '[' -z 132816 ']' 00:20:06.663 05:02:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:06.663 05:02:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:06.663 05:02:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:06.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
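Stripped of the autotest plumbing, the startup recorded above comes down to launching the bare bdev application on its own RPC socket and polling until that socket answers. A rough shell equivalent, using the paths shown in the log (the rpc_get_methods poll is only a stand-in for the real waitforlisten helper in autotest_common.sh, which is more elaborate):

  # start the minimal bdev app with raid debug logging on a dedicated RPC socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  # wait until the UNIX-domain socket accepts RPCs before issuing test commands
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.1
  done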
00:20:06.663 05:02:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:06.663 05:02:36 -- common/autotest_common.sh@10 -- # set +x 00:20:06.663 [2024-04-27 05:02:36.559751] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:06.663 [2024-04-27 05:02:36.560201] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132816 ] 00:20:06.921 [2024-04-27 05:02:36.726783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.179 [2024-04-27 05:02:36.854712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.179 [2024-04-27 05:02:36.938852] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:07.745 05:02:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:07.745 05:02:37 -- common/autotest_common.sh@852 -- # return 0 00:20:07.745 05:02:37 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:20:07.745 05:02:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:07.745 05:02:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:20:07.745 05:02:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:20:07.745 05:02:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:07.745 05:02:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:07.745 05:02:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:07.745 05:02:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:07.745 05:02:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:08.003 malloc1 00:20:08.003 05:02:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:08.262 [2024-04-27 05:02:38.077341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:08.262 [2024-04-27 05:02:38.078297] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.262 [2024-04-27 05:02:38.078611] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:20:08.262 [2024-04-27 05:02:38.078931] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.262 [2024-04-27 05:02:38.082280] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.262 [2024-04-27 05:02:38.082597] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:08.262 pt1 00:20:08.262 05:02:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:08.262 05:02:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:08.262 05:02:38 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:20:08.262 05:02:38 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:20:08.262 05:02:38 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:08.262 05:02:38 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:08.262 05:02:38 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:08.262 05:02:38 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:08.262 05:02:38 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:08.520 malloc2 00:20:08.520 05:02:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:08.778 [2024-04-27 05:02:38.571095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:08.778 [2024-04-27 05:02:38.571815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.778 [2024-04-27 05:02:38.572122] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:08.778 [2024-04-27 05:02:38.572440] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.778 [2024-04-27 05:02:38.575604] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.778 [2024-04-27 05:02:38.575900] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:08.778 pt2 00:20:08.778 05:02:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:08.778 05:02:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:08.778 05:02:38 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:20:08.778 05:02:38 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:20:08.778 05:02:38 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:08.778 05:02:38 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:08.778 05:02:38 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:08.778 05:02:38 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:08.778 05:02:38 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:09.035 malloc3 00:20:09.035 05:02:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:09.293 [2024-04-27 05:02:39.092206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:09.293 [2024-04-27 05:02:39.093143] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.293 [2024-04-27 05:02:39.093466] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:20:09.293 [2024-04-27 05:02:39.093768] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.293 [2024-04-27 05:02:39.096909] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.293 [2024-04-27 05:02:39.097248] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:09.294 pt3 00:20:09.294 05:02:39 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:09.294 05:02:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:09.294 05:02:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:20:09.294 05:02:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:20:09.294 05:02:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:09.294 05:02:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:09.294 05:02:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:09.294 05:02:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:09.294 05:02:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:20:09.552 malloc4 00:20:09.552 05:02:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:09.810 [2024-04-27 05:02:39.625644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:09.810 [2024-04-27 05:02:39.626371] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.810 [2024-04-27 05:02:39.626668] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:20:09.810 [2024-04-27 05:02:39.626960] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.810 [2024-04-27 05:02:39.630086] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.810 [2024-04-27 05:02:39.630587] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:09.810 pt4 00:20:09.810 05:02:39 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:09.810 05:02:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:09.810 05:02:39 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:20:10.069 [2024-04-27 05:02:39.871301] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:10.069 [2024-04-27 05:02:39.874149] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:10.069 [2024-04-27 05:02:39.874396] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:10.069 [2024-04-27 05:02:39.874540] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:10.069 [2024-04-27 05:02:39.874885] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:20:10.069 [2024-04-27 05:02:39.875023] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:10.069 [2024-04-27 05:02:39.875260] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:10.069 [2024-04-27 05:02:39.875804] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:20:10.069 [2024-04-27 05:02:39.875934] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:20:10.069 [2024-04-27 05:02:39.876295] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.069 05:02:39 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:10.069 05:02:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:10.069 05:02:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:10.069 05:02:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:10.069 05:02:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:10.069 05:02:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:10.069 05:02:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:10.069 05:02:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:10.069 05:02:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:10.069 05:02:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:10.070 05:02:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:20:10.070 05:02:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.329 05:02:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:10.329 "name": "raid_bdev1", 00:20:10.329 "uuid": "429cdf92-bc32-49aa-8deb-45675dba241c", 00:20:10.329 "strip_size_kb": 64, 00:20:10.329 "state": "online", 00:20:10.329 "raid_level": "concat", 00:20:10.329 "superblock": true, 00:20:10.329 "num_base_bdevs": 4, 00:20:10.329 "num_base_bdevs_discovered": 4, 00:20:10.329 "num_base_bdevs_operational": 4, 00:20:10.329 "base_bdevs_list": [ 00:20:10.329 { 00:20:10.329 "name": "pt1", 00:20:10.329 "uuid": "2dfa028c-a51e-5e85-bef4-eeaa90f01915", 00:20:10.329 "is_configured": true, 00:20:10.329 "data_offset": 2048, 00:20:10.329 "data_size": 63488 00:20:10.329 }, 00:20:10.329 { 00:20:10.329 "name": "pt2", 00:20:10.329 "uuid": "7697b516-176c-517b-8350-cbd0667470bb", 00:20:10.329 "is_configured": true, 00:20:10.329 "data_offset": 2048, 00:20:10.329 "data_size": 63488 00:20:10.329 }, 00:20:10.329 { 00:20:10.329 "name": "pt3", 00:20:10.329 "uuid": "b2a03b60-9975-5b34-8fe0-ee1c1480fe43", 00:20:10.329 "is_configured": true, 00:20:10.329 "data_offset": 2048, 00:20:10.329 "data_size": 63488 00:20:10.329 }, 00:20:10.329 { 00:20:10.329 "name": "pt4", 00:20:10.329 "uuid": "34dd80f1-2a89-5845-9dfb-a59052089f23", 00:20:10.329 "is_configured": true, 00:20:10.329 "data_offset": 2048, 00:20:10.329 "data_size": 63488 00:20:10.329 } 00:20:10.329 ] 00:20:10.329 }' 00:20:10.329 05:02:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:10.329 05:02:40 -- common/autotest_common.sh@10 -- # set +x 00:20:10.895 05:02:40 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:10.895 05:02:40 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:20:11.154 [2024-04-27 05:02:40.992907] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:11.154 05:02:41 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=429cdf92-bc32-49aa-8deb-45675dba241c 00:20:11.154 05:02:41 -- bdev/bdev_raid.sh@380 -- # '[' -z 429cdf92-bc32-49aa-8deb-45675dba241c ']' 00:20:11.154 05:02:41 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:11.412 [2024-04-27 05:02:41.260662] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:11.412 [2024-04-27 05:02:41.261030] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:11.412 [2024-04-27 05:02:41.261299] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:11.412 [2024-04-27 05:02:41.261524] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:11.412 [2024-04-27 05:02:41.261648] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:20:11.412 05:02:41 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.412 05:02:41 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:20:11.977 05:02:41 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:20:11.977 05:02:41 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:20:11.977 05:02:41 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:11.977 05:02:41 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
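For orientation, the raid_bdev1 that was just verified online and then deleted above was assembled with the following RPC pattern; a condensed sketch with the names, sizes and UUIDs taken from the trace (the $rpc shorthand and the literal 1..4 loop are simplifications of the bdev_raid.sh helpers, not their exact code):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
          # 32 MiB malloc bdev with 512-byte blocks, wrapped by a passthru bdev pt$i
          $rpc bdev_malloc_create 32 512 -b malloc$i
          $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  # concat raid across the passthru bdevs, 64 KiB strip, with an on-disk superblock (-s)
  $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
  # the test then expects state "online" with 4 of 4 base bdevs discovered
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'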
00:20:11.977 05:02:41 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:11.977 05:02:41 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:12.234 05:02:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:12.234 05:02:42 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:12.492 05:02:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:12.492 05:02:42 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:12.750 05:02:42 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:12.750 05:02:42 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:13.008 05:02:42 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:20:13.008 05:02:42 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:13.008 05:02:42 -- common/autotest_common.sh@640 -- # local es=0 00:20:13.008 05:02:42 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:13.008 05:02:42 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:13.008 05:02:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.008 05:02:42 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:13.008 05:02:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.008 05:02:42 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:13.008 05:02:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:13.008 05:02:42 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:13.008 05:02:42 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:13.008 05:02:42 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:13.264 [2024-04-27 05:02:43.081574] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:13.264 [2024-04-27 05:02:43.084347] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:13.264 [2024-04-27 05:02:43.084536] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:13.264 [2024-04-27 05:02:43.084686] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:13.264 [2024-04-27 05:02:43.084818] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:20:13.264 [2024-04-27 05:02:43.085689] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:20:13.264 [2024-04-27 05:02:43.085991] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:20:13.264 
[2024-04-27 05:02:43.086325] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:20:13.264 [2024-04-27 05:02:43.086615] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:13.264 [2024-04-27 05:02:43.086750] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:20:13.264 request: 00:20:13.264 { 00:20:13.264 "name": "raid_bdev1", 00:20:13.264 "raid_level": "concat", 00:20:13.264 "base_bdevs": [ 00:20:13.264 "malloc1", 00:20:13.264 "malloc2", 00:20:13.264 "malloc3", 00:20:13.264 "malloc4" 00:20:13.264 ], 00:20:13.264 "superblock": false, 00:20:13.264 "strip_size_kb": 64, 00:20:13.264 "method": "bdev_raid_create", 00:20:13.264 "req_id": 1 00:20:13.264 } 00:20:13.264 Got JSON-RPC error response 00:20:13.264 response: 00:20:13.264 { 00:20:13.265 "code": -17, 00:20:13.265 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:13.265 } 00:20:13.265 05:02:43 -- common/autotest_common.sh@643 -- # es=1 00:20:13.265 05:02:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:13.265 05:02:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:13.265 05:02:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:13.265 05:02:43 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.265 05:02:43 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:20:13.522 05:02:43 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:20:13.522 05:02:43 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:20:13.522 05:02:43 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:13.780 [2024-04-27 05:02:43.591277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:13.780 [2024-04-27 05:02:43.592006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.780 [2024-04-27 05:02:43.592312] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:13.780 [2024-04-27 05:02:43.592614] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.780 [2024-04-27 05:02:43.595770] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.780 [2024-04-27 05:02:43.596109] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:13.780 [2024-04-27 05:02:43.596497] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:13.780 [2024-04-27 05:02:43.596701] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:13.780 pt1 00:20:13.780 05:02:43 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:20:13.780 05:02:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:13.780 05:02:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:13.780 05:02:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:13.780 05:02:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:13.780 05:02:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:13.780 05:02:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:13.780 05:02:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:13.780 05:02:43 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:20:13.780 05:02:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:13.780 05:02:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.780 05:02:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.039 05:02:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:14.039 "name": "raid_bdev1", 00:20:14.039 "uuid": "429cdf92-bc32-49aa-8deb-45675dba241c", 00:20:14.039 "strip_size_kb": 64, 00:20:14.039 "state": "configuring", 00:20:14.039 "raid_level": "concat", 00:20:14.039 "superblock": true, 00:20:14.039 "num_base_bdevs": 4, 00:20:14.039 "num_base_bdevs_discovered": 1, 00:20:14.039 "num_base_bdevs_operational": 4, 00:20:14.039 "base_bdevs_list": [ 00:20:14.039 { 00:20:14.039 "name": "pt1", 00:20:14.039 "uuid": "2dfa028c-a51e-5e85-bef4-eeaa90f01915", 00:20:14.039 "is_configured": true, 00:20:14.039 "data_offset": 2048, 00:20:14.039 "data_size": 63488 00:20:14.039 }, 00:20:14.039 { 00:20:14.039 "name": null, 00:20:14.039 "uuid": "7697b516-176c-517b-8350-cbd0667470bb", 00:20:14.039 "is_configured": false, 00:20:14.039 "data_offset": 2048, 00:20:14.039 "data_size": 63488 00:20:14.039 }, 00:20:14.039 { 00:20:14.039 "name": null, 00:20:14.039 "uuid": "b2a03b60-9975-5b34-8fe0-ee1c1480fe43", 00:20:14.039 "is_configured": false, 00:20:14.039 "data_offset": 2048, 00:20:14.039 "data_size": 63488 00:20:14.039 }, 00:20:14.039 { 00:20:14.039 "name": null, 00:20:14.039 "uuid": "34dd80f1-2a89-5845-9dfb-a59052089f23", 00:20:14.039 "is_configured": false, 00:20:14.039 "data_offset": 2048, 00:20:14.039 "data_size": 63488 00:20:14.039 } 00:20:14.039 ] 00:20:14.039 }' 00:20:14.039 05:02:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:14.039 05:02:43 -- common/autotest_common.sh@10 -- # set +x 00:20:14.605 05:02:44 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:20:14.605 05:02:44 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:14.862 [2024-04-27 05:02:44.733512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:14.862 [2024-04-27 05:02:44.734428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.862 [2024-04-27 05:02:44.734743] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:14.862 [2024-04-27 05:02:44.735100] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.862 [2024-04-27 05:02:44.735915] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.862 [2024-04-27 05:02:44.736202] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:14.862 [2024-04-27 05:02:44.736583] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:14.862 [2024-04-27 05:02:44.736735] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:14.862 pt2 00:20:14.862 05:02:44 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:15.120 [2024-04-27 05:02:44.981599] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:15.120 05:02:44 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:20:15.120 05:02:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
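Both the failed create and the pt2 churn above hinge on the superblock written during the first assembly: bdev_raid_create over malloc1..malloc4 is refused with -17 (File exists) because those bdevs still carry the raid_bdev1 superblock (the *ERROR* lines earlier), and whenever a passthru bdev is re-registered on top of one of them, the examine callback (raid_bdev_examine_load_sb_cb) finds that superblock and re-claims the bdev into the still-configuring raid_bdev1. A condensed sketch of that cycle, with names from the log and the $rpc shorthand as before:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # expected to fail: the malloc bdevs already hold a raid_bdev1 superblock
  $rpc bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 && exit 1
  # drop one leg and bring it back; examine re-claims it automatically
  $rpc bdev_passthru_delete pt2
  $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # stays "configuring" until pt1..pt4 are all registered again
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'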
00:20:15.120 05:02:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:15.120 05:02:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:15.120 05:02:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:15.120 05:02:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:15.120 05:02:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:15.120 05:02:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:15.120 05:02:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:15.120 05:02:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:15.120 05:02:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.120 05:02:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.378 05:02:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:15.378 "name": "raid_bdev1", 00:20:15.378 "uuid": "429cdf92-bc32-49aa-8deb-45675dba241c", 00:20:15.378 "strip_size_kb": 64, 00:20:15.378 "state": "configuring", 00:20:15.378 "raid_level": "concat", 00:20:15.378 "superblock": true, 00:20:15.378 "num_base_bdevs": 4, 00:20:15.378 "num_base_bdevs_discovered": 1, 00:20:15.378 "num_base_bdevs_operational": 4, 00:20:15.378 "base_bdevs_list": [ 00:20:15.378 { 00:20:15.378 "name": "pt1", 00:20:15.378 "uuid": "2dfa028c-a51e-5e85-bef4-eeaa90f01915", 00:20:15.378 "is_configured": true, 00:20:15.378 "data_offset": 2048, 00:20:15.378 "data_size": 63488 00:20:15.378 }, 00:20:15.378 { 00:20:15.378 "name": null, 00:20:15.378 "uuid": "7697b516-176c-517b-8350-cbd0667470bb", 00:20:15.378 "is_configured": false, 00:20:15.378 "data_offset": 2048, 00:20:15.378 "data_size": 63488 00:20:15.378 }, 00:20:15.378 { 00:20:15.378 "name": null, 00:20:15.378 "uuid": "b2a03b60-9975-5b34-8fe0-ee1c1480fe43", 00:20:15.378 "is_configured": false, 00:20:15.378 "data_offset": 2048, 00:20:15.378 "data_size": 63488 00:20:15.378 }, 00:20:15.378 { 00:20:15.378 "name": null, 00:20:15.378 "uuid": "34dd80f1-2a89-5845-9dfb-a59052089f23", 00:20:15.378 "is_configured": false, 00:20:15.378 "data_offset": 2048, 00:20:15.378 "data_size": 63488 00:20:15.378 } 00:20:15.378 ] 00:20:15.378 }' 00:20:15.378 05:02:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:15.378 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:20:16.312 05:02:45 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:20:16.312 05:02:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:16.312 05:02:45 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:16.312 [2024-04-27 05:02:46.169220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:16.312 [2024-04-27 05:02:46.169617] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.312 [2024-04-27 05:02:46.169725] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:16.313 [2024-04-27 05:02:46.169793] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.313 [2024-04-27 05:02:46.170388] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.313 [2024-04-27 05:02:46.170577] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:16.313 [2024-04-27 05:02:46.170808] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:20:16.313 [2024-04-27 05:02:46.170955] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:16.313 pt2 00:20:16.313 05:02:46 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:16.313 05:02:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:16.313 05:02:46 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:16.571 [2024-04-27 05:02:46.437306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:16.571 [2024-04-27 05:02:46.437748] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.571 [2024-04-27 05:02:46.437839] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:16.571 [2024-04-27 05:02:46.438117] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.571 [2024-04-27 05:02:46.438748] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.571 [2024-04-27 05:02:46.438941] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:16.571 [2024-04-27 05:02:46.439164] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:16.571 [2024-04-27 05:02:46.439299] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:16.571 pt3 00:20:16.571 05:02:46 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:16.571 05:02:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:16.571 05:02:46 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:16.829 [2024-04-27 05:02:46.677342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:16.829 [2024-04-27 05:02:46.677766] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.829 [2024-04-27 05:02:46.677943] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:16.829 [2024-04-27 05:02:46.678108] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.829 [2024-04-27 05:02:46.678759] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.829 [2024-04-27 05:02:46.678945] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:16.829 [2024-04-27 05:02:46.679179] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:16.829 [2024-04-27 05:02:46.679323] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:16.829 [2024-04-27 05:02:46.679606] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:20:16.829 [2024-04-27 05:02:46.679729] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:16.829 [2024-04-27 05:02:46.679865] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:16.829 [2024-04-27 05:02:46.680265] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:20:16.829 [2024-04-27 05:02:46.680390] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:20:16.829 [2024-04-27 05:02:46.680668] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:20:16.829 pt4 00:20:16.829 05:02:46 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:16.829 05:02:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:16.829 05:02:46 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:16.829 05:02:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:16.829 05:02:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:16.829 05:02:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:16.829 05:02:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:16.829 05:02:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:16.829 05:02:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:16.829 05:02:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:16.829 05:02:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:16.829 05:02:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:16.829 05:02:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.829 05:02:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.087 05:02:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:17.087 "name": "raid_bdev1", 00:20:17.087 "uuid": "429cdf92-bc32-49aa-8deb-45675dba241c", 00:20:17.087 "strip_size_kb": 64, 00:20:17.087 "state": "online", 00:20:17.087 "raid_level": "concat", 00:20:17.087 "superblock": true, 00:20:17.087 "num_base_bdevs": 4, 00:20:17.087 "num_base_bdevs_discovered": 4, 00:20:17.087 "num_base_bdevs_operational": 4, 00:20:17.087 "base_bdevs_list": [ 00:20:17.087 { 00:20:17.087 "name": "pt1", 00:20:17.087 "uuid": "2dfa028c-a51e-5e85-bef4-eeaa90f01915", 00:20:17.087 "is_configured": true, 00:20:17.087 "data_offset": 2048, 00:20:17.087 "data_size": 63488 00:20:17.087 }, 00:20:17.087 { 00:20:17.087 "name": "pt2", 00:20:17.087 "uuid": "7697b516-176c-517b-8350-cbd0667470bb", 00:20:17.087 "is_configured": true, 00:20:17.087 "data_offset": 2048, 00:20:17.087 "data_size": 63488 00:20:17.087 }, 00:20:17.087 { 00:20:17.087 "name": "pt3", 00:20:17.087 "uuid": "b2a03b60-9975-5b34-8fe0-ee1c1480fe43", 00:20:17.087 "is_configured": true, 00:20:17.087 "data_offset": 2048, 00:20:17.087 "data_size": 63488 00:20:17.087 }, 00:20:17.087 { 00:20:17.087 "name": "pt4", 00:20:17.087 "uuid": "34dd80f1-2a89-5845-9dfb-a59052089f23", 00:20:17.087 "is_configured": true, 00:20:17.087 "data_offset": 2048, 00:20:17.087 "data_size": 63488 00:20:17.087 } 00:20:17.087 ] 00:20:17.087 }' 00:20:17.087 05:02:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:17.087 05:02:46 -- common/autotest_common.sh@10 -- # set +x 00:20:18.022 05:02:47 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:18.022 05:02:47 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:20:18.022 [2024-04-27 05:02:47.821406] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:18.022 05:02:47 -- bdev/bdev_raid.sh@430 -- # '[' 429cdf92-bc32-49aa-8deb-45675dba241c '!=' 429cdf92-bc32-49aa-8deb-45675dba241c ']' 00:20:18.022 05:02:47 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:20:18.022 05:02:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:18.022 05:02:47 -- bdev/bdev_raid.sh@197 -- # return 1 00:20:18.022 05:02:47 -- bdev/bdev_raid.sh@511 -- # killprocess 132816 00:20:18.022 05:02:47 -- common/autotest_common.sh@926 -- # '[' 
-z 132816 ']' 00:20:18.022 05:02:47 -- common/autotest_common.sh@930 -- # kill -0 132816 00:20:18.022 05:02:47 -- common/autotest_common.sh@931 -- # uname 00:20:18.022 05:02:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:18.022 05:02:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132816 00:20:18.022 05:02:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:18.022 05:02:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:18.022 05:02:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132816' 00:20:18.022 killing process with pid 132816 00:20:18.022 05:02:47 -- common/autotest_common.sh@945 -- # kill 132816 00:20:18.022 05:02:47 -- common/autotest_common.sh@950 -- # wait 132816 00:20:18.022 [2024-04-27 05:02:47.873687] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:18.022 [2024-04-27 05:02:47.873804] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:18.022 [2024-04-27 05:02:47.873890] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:18.022 [2024-04-27 05:02:47.873904] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:20:18.022 [2024-04-27 05:02:47.924372] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@513 -- # return 0 00:20:18.588 00:20:18.588 real 0m11.689s 00:20:18.588 user 0m20.990s 00:20:18.588 sys 0m1.739s 00:20:18.588 05:02:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:18.588 05:02:48 -- common/autotest_common.sh@10 -- # set +x 00:20:18.588 ************************************ 00:20:18.588 END TEST raid_superblock_test 00:20:18.588 ************************************ 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:20:18.588 05:02:48 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:20:18.588 05:02:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:18.588 05:02:48 -- common/autotest_common.sh@10 -- # set +x 00:20:18.588 ************************************ 00:20:18.588 START TEST raid_state_function_test 00:20:18.588 ************************************ 00:20:18.588 05:02:48 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:18.588 05:02:48 -- 
bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@226 -- # raid_pid=133144 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 133144' 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:18.588 Process raid pid: 133144 00:20:18.588 05:02:48 -- bdev/bdev_raid.sh@228 -- # waitforlisten 133144 /var/tmp/spdk-raid.sock 00:20:18.588 05:02:48 -- common/autotest_common.sh@819 -- # '[' -z 133144 ']' 00:20:18.588 05:02:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:18.588 05:02:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:18.588 05:02:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:18.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:18.588 05:02:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:18.588 05:02:48 -- common/autotest_common.sh@10 -- # set +x 00:20:18.588 [2024-04-27 05:02:48.323116] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
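The locals above pin down this run: raid level raid1, four base bdevs, no superblock. As the trace below shows, the test first declares Existed_Raid over base bdevs that do not exist yet and then registers them one at a time, checking the reported state after each step. Roughly, for the first step (a sketch, with the $rpc shorthand as before):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # declare the raid before any of its base bdevs exist; it stays "configuring"
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # register one base bdev; the raid module claims it and the discovered count goes up
  $rpc bdev_malloc_create 32 512 -b BaseBdev1
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'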
00:20:18.588 [2024-04-27 05:02:48.324338] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.847 [2024-04-27 05:02:48.496627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.847 [2024-04-27 05:02:48.631385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.847 [2024-04-27 05:02:48.720967] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:19.781 05:02:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:19.781 05:02:49 -- common/autotest_common.sh@852 -- # return 0 00:20:19.781 05:02:49 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:19.781 [2024-04-27 05:02:49.575621] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:19.781 [2024-04-27 05:02:49.576573] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:19.781 [2024-04-27 05:02:49.576719] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:19.781 [2024-04-27 05:02:49.576911] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:19.781 [2024-04-27 05:02:49.577105] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:19.781 [2024-04-27 05:02:49.577342] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:19.781 [2024-04-27 05:02:49.577463] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:19.781 [2024-04-27 05:02:49.577653] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:19.781 05:02:49 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:19.781 05:02:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:19.781 05:02:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:19.781 05:02:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:19.781 05:02:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:19.781 05:02:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:19.781 05:02:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:19.781 05:02:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:19.781 05:02:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:19.781 05:02:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:19.781 05:02:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.781 05:02:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.039 05:02:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:20.039 "name": "Existed_Raid", 00:20:20.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.039 "strip_size_kb": 0, 00:20:20.039 "state": "configuring", 00:20:20.039 "raid_level": "raid1", 00:20:20.039 "superblock": false, 00:20:20.039 "num_base_bdevs": 4, 00:20:20.039 "num_base_bdevs_discovered": 0, 00:20:20.039 "num_base_bdevs_operational": 4, 00:20:20.039 "base_bdevs_list": [ 00:20:20.039 { 00:20:20.039 "name": 
"BaseBdev1", 00:20:20.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.039 "is_configured": false, 00:20:20.039 "data_offset": 0, 00:20:20.039 "data_size": 0 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "name": "BaseBdev2", 00:20:20.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.039 "is_configured": false, 00:20:20.039 "data_offset": 0, 00:20:20.039 "data_size": 0 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "name": "BaseBdev3", 00:20:20.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.039 "is_configured": false, 00:20:20.039 "data_offset": 0, 00:20:20.039 "data_size": 0 00:20:20.039 }, 00:20:20.039 { 00:20:20.039 "name": "BaseBdev4", 00:20:20.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.039 "is_configured": false, 00:20:20.039 "data_offset": 0, 00:20:20.039 "data_size": 0 00:20:20.039 } 00:20:20.039 ] 00:20:20.039 }' 00:20:20.039 05:02:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:20.039 05:02:49 -- common/autotest_common.sh@10 -- # set +x 00:20:20.972 05:02:50 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:20.972 [2024-04-27 05:02:50.739731] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:20.972 [2024-04-27 05:02:50.740082] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:20:20.972 05:02:50 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:21.229 [2024-04-27 05:02:51.027828] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:21.229 [2024-04-27 05:02:51.028760] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:21.229 [2024-04-27 05:02:51.029293] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:21.229 [2024-04-27 05:02:51.029822] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:21.229 [2024-04-27 05:02:51.030233] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:21.229 [2024-04-27 05:02:51.030702] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:21.229 [2024-04-27 05:02:51.031142] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:21.229 [2024-04-27 05:02:51.031574] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:21.229 05:02:51 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:21.487 [2024-04-27 05:02:51.284335] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:21.487 BaseBdev1 00:20:21.487 05:02:51 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:21.487 05:02:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:20:21.487 05:02:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:21.487 05:02:51 -- common/autotest_common.sh@889 -- # local i 00:20:21.487 05:02:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:21.487 05:02:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:21.487 05:02:51 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:21.745 05:02:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:22.003 [ 00:20:22.003 { 00:20:22.003 "name": "BaseBdev1", 00:20:22.003 "aliases": [ 00:20:22.003 "d9104262-fb25-4fa8-b196-9d2b049a5c3e" 00:20:22.003 ], 00:20:22.003 "product_name": "Malloc disk", 00:20:22.003 "block_size": 512, 00:20:22.003 "num_blocks": 65536, 00:20:22.003 "uuid": "d9104262-fb25-4fa8-b196-9d2b049a5c3e", 00:20:22.003 "assigned_rate_limits": { 00:20:22.003 "rw_ios_per_sec": 0, 00:20:22.003 "rw_mbytes_per_sec": 0, 00:20:22.003 "r_mbytes_per_sec": 0, 00:20:22.003 "w_mbytes_per_sec": 0 00:20:22.003 }, 00:20:22.003 "claimed": true, 00:20:22.003 "claim_type": "exclusive_write", 00:20:22.003 "zoned": false, 00:20:22.003 "supported_io_types": { 00:20:22.003 "read": true, 00:20:22.003 "write": true, 00:20:22.003 "unmap": true, 00:20:22.003 "write_zeroes": true, 00:20:22.003 "flush": true, 00:20:22.003 "reset": true, 00:20:22.003 "compare": false, 00:20:22.003 "compare_and_write": false, 00:20:22.003 "abort": true, 00:20:22.003 "nvme_admin": false, 00:20:22.003 "nvme_io": false 00:20:22.003 }, 00:20:22.003 "memory_domains": [ 00:20:22.003 { 00:20:22.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.003 "dma_device_type": 2 00:20:22.003 } 00:20:22.003 ], 00:20:22.003 "driver_specific": {} 00:20:22.003 } 00:20:22.003 ] 00:20:22.003 05:02:51 -- common/autotest_common.sh@895 -- # return 0 00:20:22.003 05:02:51 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:22.003 05:02:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:22.003 05:02:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:22.003 05:02:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:22.003 05:02:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:22.003 05:02:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:22.003 05:02:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:22.003 05:02:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:22.003 05:02:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:22.003 05:02:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:22.003 05:02:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.003 05:02:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.261 05:02:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:22.261 "name": "Existed_Raid", 00:20:22.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.261 "strip_size_kb": 0, 00:20:22.261 "state": "configuring", 00:20:22.261 "raid_level": "raid1", 00:20:22.261 "superblock": false, 00:20:22.261 "num_base_bdevs": 4, 00:20:22.261 "num_base_bdevs_discovered": 1, 00:20:22.261 "num_base_bdevs_operational": 4, 00:20:22.261 "base_bdevs_list": [ 00:20:22.261 { 00:20:22.261 "name": "BaseBdev1", 00:20:22.261 "uuid": "d9104262-fb25-4fa8-b196-9d2b049a5c3e", 00:20:22.261 "is_configured": true, 00:20:22.261 "data_offset": 0, 00:20:22.261 "data_size": 65536 00:20:22.261 }, 00:20:22.261 { 00:20:22.261 "name": "BaseBdev2", 00:20:22.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.261 "is_configured": false, 00:20:22.261 "data_offset": 0, 00:20:22.261 "data_size": 0 00:20:22.261 }, 
00:20:22.261 { 00:20:22.261 "name": "BaseBdev3", 00:20:22.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.261 "is_configured": false, 00:20:22.261 "data_offset": 0, 00:20:22.261 "data_size": 0 00:20:22.261 }, 00:20:22.261 { 00:20:22.261 "name": "BaseBdev4", 00:20:22.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.261 "is_configured": false, 00:20:22.261 "data_offset": 0, 00:20:22.261 "data_size": 0 00:20:22.261 } 00:20:22.261 ] 00:20:22.261 }' 00:20:22.261 05:02:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:22.261 05:02:52 -- common/autotest_common.sh@10 -- # set +x 00:20:22.826 05:02:52 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:23.084 [2024-04-27 05:02:52.909162] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:23.084 [2024-04-27 05:02:52.909811] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:20:23.084 05:02:52 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:20:23.084 05:02:52 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:23.341 [2024-04-27 05:02:53.177312] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:23.341 [2024-04-27 05:02:53.180273] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:23.341 [2024-04-27 05:02:53.180667] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:23.341 [2024-04-27 05:02:53.180890] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:23.341 [2024-04-27 05:02:53.181198] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:23.341 [2024-04-27 05:02:53.181425] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:23.341 [2024-04-27 05:02:53.181670] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:23.341 05:02:53 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:23.341 05:02:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:23.341 05:02:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:23.341 05:02:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:23.341 05:02:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:23.341 05:02:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:23.341 05:02:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:23.341 05:02:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:23.341 05:02:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:23.341 05:02:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:23.341 05:02:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:23.341 05:02:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:23.341 05:02:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.341 05:02:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.606 05:02:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:23.606 "name": "Existed_Raid", 00:20:23.606 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:23.606 "strip_size_kb": 0, 00:20:23.606 "state": "configuring", 00:20:23.606 "raid_level": "raid1", 00:20:23.606 "superblock": false, 00:20:23.606 "num_base_bdevs": 4, 00:20:23.606 "num_base_bdevs_discovered": 1, 00:20:23.606 "num_base_bdevs_operational": 4, 00:20:23.606 "base_bdevs_list": [ 00:20:23.606 { 00:20:23.606 "name": "BaseBdev1", 00:20:23.606 "uuid": "d9104262-fb25-4fa8-b196-9d2b049a5c3e", 00:20:23.606 "is_configured": true, 00:20:23.606 "data_offset": 0, 00:20:23.606 "data_size": 65536 00:20:23.606 }, 00:20:23.606 { 00:20:23.606 "name": "BaseBdev2", 00:20:23.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.606 "is_configured": false, 00:20:23.606 "data_offset": 0, 00:20:23.606 "data_size": 0 00:20:23.606 }, 00:20:23.606 { 00:20:23.606 "name": "BaseBdev3", 00:20:23.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.606 "is_configured": false, 00:20:23.606 "data_offset": 0, 00:20:23.606 "data_size": 0 00:20:23.607 }, 00:20:23.607 { 00:20:23.607 "name": "BaseBdev4", 00:20:23.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.607 "is_configured": false, 00:20:23.607 "data_offset": 0, 00:20:23.607 "data_size": 0 00:20:23.607 } 00:20:23.607 ] 00:20:23.607 }' 00:20:23.607 05:02:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:23.607 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:20:24.555 05:02:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:24.555 [2024-04-27 05:02:54.402195] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:24.555 BaseBdev2 00:20:24.555 05:02:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:24.555 05:02:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:20:24.555 05:02:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:24.555 05:02:54 -- common/autotest_common.sh@889 -- # local i 00:20:24.555 05:02:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:24.555 05:02:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:24.555 05:02:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:24.838 05:02:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:25.096 [ 00:20:25.096 { 00:20:25.096 "name": "BaseBdev2", 00:20:25.096 "aliases": [ 00:20:25.096 "0729da7a-c357-4ecf-854a-ad6b9b067d61" 00:20:25.096 ], 00:20:25.096 "product_name": "Malloc disk", 00:20:25.096 "block_size": 512, 00:20:25.096 "num_blocks": 65536, 00:20:25.096 "uuid": "0729da7a-c357-4ecf-854a-ad6b9b067d61", 00:20:25.096 "assigned_rate_limits": { 00:20:25.096 "rw_ios_per_sec": 0, 00:20:25.096 "rw_mbytes_per_sec": 0, 00:20:25.096 "r_mbytes_per_sec": 0, 00:20:25.096 "w_mbytes_per_sec": 0 00:20:25.096 }, 00:20:25.096 "claimed": true, 00:20:25.096 "claim_type": "exclusive_write", 00:20:25.096 "zoned": false, 00:20:25.096 "supported_io_types": { 00:20:25.096 "read": true, 00:20:25.096 "write": true, 00:20:25.096 "unmap": true, 00:20:25.096 "write_zeroes": true, 00:20:25.096 "flush": true, 00:20:25.096 "reset": true, 00:20:25.096 "compare": false, 00:20:25.096 "compare_and_write": false, 00:20:25.096 "abort": true, 00:20:25.096 "nvme_admin": false, 00:20:25.096 "nvme_io": false 00:20:25.096 }, 00:20:25.096 "memory_domains": [ 00:20:25.096 { 
00:20:25.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.096 "dma_device_type": 2 00:20:25.096 } 00:20:25.096 ], 00:20:25.096 "driver_specific": {} 00:20:25.096 } 00:20:25.096 ] 00:20:25.096 05:02:54 -- common/autotest_common.sh@895 -- # return 0 00:20:25.096 05:02:54 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:25.096 05:02:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:25.096 05:02:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:25.096 05:02:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:25.096 05:02:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:25.096 05:02:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:25.096 05:02:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:25.096 05:02:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:25.096 05:02:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:25.096 05:02:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:25.096 05:02:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:25.096 05:02:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:25.096 05:02:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.096 05:02:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:25.356 05:02:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:25.356 "name": "Existed_Raid", 00:20:25.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.356 "strip_size_kb": 0, 00:20:25.356 "state": "configuring", 00:20:25.356 "raid_level": "raid1", 00:20:25.356 "superblock": false, 00:20:25.356 "num_base_bdevs": 4, 00:20:25.356 "num_base_bdevs_discovered": 2, 00:20:25.356 "num_base_bdevs_operational": 4, 00:20:25.356 "base_bdevs_list": [ 00:20:25.356 { 00:20:25.356 "name": "BaseBdev1", 00:20:25.356 "uuid": "d9104262-fb25-4fa8-b196-9d2b049a5c3e", 00:20:25.356 "is_configured": true, 00:20:25.356 "data_offset": 0, 00:20:25.356 "data_size": 65536 00:20:25.356 }, 00:20:25.356 { 00:20:25.356 "name": "BaseBdev2", 00:20:25.356 "uuid": "0729da7a-c357-4ecf-854a-ad6b9b067d61", 00:20:25.356 "is_configured": true, 00:20:25.356 "data_offset": 0, 00:20:25.356 "data_size": 65536 00:20:25.356 }, 00:20:25.356 { 00:20:25.356 "name": "BaseBdev3", 00:20:25.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.356 "is_configured": false, 00:20:25.356 "data_offset": 0, 00:20:25.356 "data_size": 0 00:20:25.356 }, 00:20:25.356 { 00:20:25.356 "name": "BaseBdev4", 00:20:25.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.356 "is_configured": false, 00:20:25.356 "data_offset": 0, 00:20:25.356 "data_size": 0 00:20:25.356 } 00:20:25.356 ] 00:20:25.356 }' 00:20:25.356 05:02:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:25.356 05:02:55 -- common/autotest_common.sh@10 -- # set +x 00:20:26.291 05:02:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:26.291 [2024-04-27 05:02:56.094500] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:26.291 BaseBdev3 00:20:26.291 05:02:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:20:26.291 05:02:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:20:26.291 05:02:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:26.291 05:02:56 -- 
common/autotest_common.sh@889 -- # local i 00:20:26.291 05:02:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:26.291 05:02:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:26.291 05:02:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:26.551 05:02:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:26.810 [ 00:20:26.810 { 00:20:26.810 "name": "BaseBdev3", 00:20:26.810 "aliases": [ 00:20:26.810 "cd86fba9-a4f9-4845-9d59-28995f252468" 00:20:26.810 ], 00:20:26.810 "product_name": "Malloc disk", 00:20:26.810 "block_size": 512, 00:20:26.810 "num_blocks": 65536, 00:20:26.810 "uuid": "cd86fba9-a4f9-4845-9d59-28995f252468", 00:20:26.810 "assigned_rate_limits": { 00:20:26.810 "rw_ios_per_sec": 0, 00:20:26.810 "rw_mbytes_per_sec": 0, 00:20:26.810 "r_mbytes_per_sec": 0, 00:20:26.810 "w_mbytes_per_sec": 0 00:20:26.810 }, 00:20:26.810 "claimed": true, 00:20:26.810 "claim_type": "exclusive_write", 00:20:26.810 "zoned": false, 00:20:26.810 "supported_io_types": { 00:20:26.810 "read": true, 00:20:26.810 "write": true, 00:20:26.810 "unmap": true, 00:20:26.810 "write_zeroes": true, 00:20:26.810 "flush": true, 00:20:26.810 "reset": true, 00:20:26.810 "compare": false, 00:20:26.810 "compare_and_write": false, 00:20:26.810 "abort": true, 00:20:26.810 "nvme_admin": false, 00:20:26.810 "nvme_io": false 00:20:26.810 }, 00:20:26.810 "memory_domains": [ 00:20:26.810 { 00:20:26.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.810 "dma_device_type": 2 00:20:26.810 } 00:20:26.810 ], 00:20:26.810 "driver_specific": {} 00:20:26.810 } 00:20:26.810 ] 00:20:26.810 05:02:56 -- common/autotest_common.sh@895 -- # return 0 00:20:26.810 05:02:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:26.810 05:02:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:26.810 05:02:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:26.810 05:02:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:26.810 05:02:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:26.810 05:02:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:26.810 05:02:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:26.810 05:02:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:26.810 05:02:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:26.810 05:02:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:26.810 05:02:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:26.810 05:02:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:26.810 05:02:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.810 05:02:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.070 05:02:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:27.070 "name": "Existed_Raid", 00:20:27.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.070 "strip_size_kb": 0, 00:20:27.070 "state": "configuring", 00:20:27.070 "raid_level": "raid1", 00:20:27.070 "superblock": false, 00:20:27.070 "num_base_bdevs": 4, 00:20:27.070 "num_base_bdevs_discovered": 3, 00:20:27.070 "num_base_bdevs_operational": 4, 00:20:27.070 "base_bdevs_list": [ 00:20:27.070 { 00:20:27.070 "name": "BaseBdev1", 
00:20:27.070 "uuid": "d9104262-fb25-4fa8-b196-9d2b049a5c3e", 00:20:27.070 "is_configured": true, 00:20:27.070 "data_offset": 0, 00:20:27.070 "data_size": 65536 00:20:27.070 }, 00:20:27.070 { 00:20:27.070 "name": "BaseBdev2", 00:20:27.070 "uuid": "0729da7a-c357-4ecf-854a-ad6b9b067d61", 00:20:27.070 "is_configured": true, 00:20:27.070 "data_offset": 0, 00:20:27.070 "data_size": 65536 00:20:27.070 }, 00:20:27.070 { 00:20:27.070 "name": "BaseBdev3", 00:20:27.070 "uuid": "cd86fba9-a4f9-4845-9d59-28995f252468", 00:20:27.070 "is_configured": true, 00:20:27.070 "data_offset": 0, 00:20:27.070 "data_size": 65536 00:20:27.070 }, 00:20:27.070 { 00:20:27.070 "name": "BaseBdev4", 00:20:27.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.070 "is_configured": false, 00:20:27.070 "data_offset": 0, 00:20:27.070 "data_size": 0 00:20:27.070 } 00:20:27.070 ] 00:20:27.070 }' 00:20:27.070 05:02:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:27.070 05:02:56 -- common/autotest_common.sh@10 -- # set +x 00:20:28.005 05:02:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:28.005 [2024-04-27 05:02:57.876136] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:28.005 [2024-04-27 05:02:57.876518] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:20:28.005 [2024-04-27 05:02:57.876607] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:28.005 [2024-04-27 05:02:57.876923] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:20:28.005 [2024-04-27 05:02:57.877534] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:20:28.005 [2024-04-27 05:02:57.877678] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:20:28.005 [2024-04-27 05:02:57.878097] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.005 BaseBdev4 00:20:28.005 05:02:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:20:28.005 05:02:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:20:28.005 05:02:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:28.005 05:02:57 -- common/autotest_common.sh@889 -- # local i 00:20:28.005 05:02:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:28.005 05:02:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:28.005 05:02:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:28.263 05:02:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:28.522 [ 00:20:28.522 { 00:20:28.522 "name": "BaseBdev4", 00:20:28.522 "aliases": [ 00:20:28.522 "0fde7c71-b4c0-4333-a619-fea0e8388f2f" 00:20:28.522 ], 00:20:28.522 "product_name": "Malloc disk", 00:20:28.522 "block_size": 512, 00:20:28.522 "num_blocks": 65536, 00:20:28.522 "uuid": "0fde7c71-b4c0-4333-a619-fea0e8388f2f", 00:20:28.522 "assigned_rate_limits": { 00:20:28.522 "rw_ios_per_sec": 0, 00:20:28.522 "rw_mbytes_per_sec": 0, 00:20:28.522 "r_mbytes_per_sec": 0, 00:20:28.522 "w_mbytes_per_sec": 0 00:20:28.522 }, 00:20:28.522 "claimed": true, 00:20:28.522 "claim_type": "exclusive_write", 00:20:28.522 "zoned": false, 00:20:28.522 "supported_io_types": { 
00:20:28.522 "read": true, 00:20:28.522 "write": true, 00:20:28.522 "unmap": true, 00:20:28.522 "write_zeroes": true, 00:20:28.522 "flush": true, 00:20:28.522 "reset": true, 00:20:28.522 "compare": false, 00:20:28.522 "compare_and_write": false, 00:20:28.522 "abort": true, 00:20:28.522 "nvme_admin": false, 00:20:28.522 "nvme_io": false 00:20:28.522 }, 00:20:28.522 "memory_domains": [ 00:20:28.522 { 00:20:28.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.522 "dma_device_type": 2 00:20:28.522 } 00:20:28.522 ], 00:20:28.522 "driver_specific": {} 00:20:28.522 } 00:20:28.522 ] 00:20:28.522 05:02:58 -- common/autotest_common.sh@895 -- # return 0 00:20:28.522 05:02:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:28.522 05:02:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:28.522 05:02:58 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:28.522 05:02:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:28.522 05:02:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:28.522 05:02:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:28.522 05:02:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:28.522 05:02:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:28.522 05:02:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:28.522 05:02:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:28.522 05:02:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:28.522 05:02:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:28.522 05:02:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.522 05:02:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.781 05:02:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:28.781 "name": "Existed_Raid", 00:20:28.781 "uuid": "bce0ac61-d701-42e6-9db6-47ab80178c51", 00:20:28.781 "strip_size_kb": 0, 00:20:28.781 "state": "online", 00:20:28.781 "raid_level": "raid1", 00:20:28.781 "superblock": false, 00:20:28.781 "num_base_bdevs": 4, 00:20:28.781 "num_base_bdevs_discovered": 4, 00:20:28.781 "num_base_bdevs_operational": 4, 00:20:28.781 "base_bdevs_list": [ 00:20:28.781 { 00:20:28.781 "name": "BaseBdev1", 00:20:28.781 "uuid": "d9104262-fb25-4fa8-b196-9d2b049a5c3e", 00:20:28.781 "is_configured": true, 00:20:28.781 "data_offset": 0, 00:20:28.781 "data_size": 65536 00:20:28.781 }, 00:20:28.781 { 00:20:28.781 "name": "BaseBdev2", 00:20:28.781 "uuid": "0729da7a-c357-4ecf-854a-ad6b9b067d61", 00:20:28.781 "is_configured": true, 00:20:28.781 "data_offset": 0, 00:20:28.781 "data_size": 65536 00:20:28.781 }, 00:20:28.781 { 00:20:28.781 "name": "BaseBdev3", 00:20:28.781 "uuid": "cd86fba9-a4f9-4845-9d59-28995f252468", 00:20:28.781 "is_configured": true, 00:20:28.781 "data_offset": 0, 00:20:28.781 "data_size": 65536 00:20:28.781 }, 00:20:28.781 { 00:20:28.781 "name": "BaseBdev4", 00:20:28.781 "uuid": "0fde7c71-b4c0-4333-a619-fea0e8388f2f", 00:20:28.781 "is_configured": true, 00:20:28.781 "data_offset": 0, 00:20:28.781 "data_size": 65536 00:20:28.781 } 00:20:28.781 ] 00:20:28.781 }' 00:20:28.781 05:02:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:28.781 05:02:58 -- common/autotest_common.sh@10 -- # set +x 00:20:29.716 05:02:59 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:29.716 [2024-04-27 05:02:59.500857] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:29.716 05:02:59 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:29.716 05:02:59 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:20:29.716 05:02:59 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:29.716 05:02:59 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:29.716 05:02:59 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:20:29.716 05:02:59 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:29.716 05:02:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:29.716 05:02:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:29.716 05:02:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:29.716 05:02:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:29.716 05:02:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:29.716 05:02:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:29.716 05:02:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:29.716 05:02:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:29.716 05:02:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:29.716 05:02:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.716 05:02:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.974 05:02:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:29.974 "name": "Existed_Raid", 00:20:29.974 "uuid": "bce0ac61-d701-42e6-9db6-47ab80178c51", 00:20:29.974 "strip_size_kb": 0, 00:20:29.974 "state": "online", 00:20:29.974 "raid_level": "raid1", 00:20:29.974 "superblock": false, 00:20:29.974 "num_base_bdevs": 4, 00:20:29.974 "num_base_bdevs_discovered": 3, 00:20:29.974 "num_base_bdevs_operational": 3, 00:20:29.974 "base_bdevs_list": [ 00:20:29.974 { 00:20:29.974 "name": null, 00:20:29.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.975 "is_configured": false, 00:20:29.975 "data_offset": 0, 00:20:29.975 "data_size": 65536 00:20:29.975 }, 00:20:29.975 { 00:20:29.975 "name": "BaseBdev2", 00:20:29.975 "uuid": "0729da7a-c357-4ecf-854a-ad6b9b067d61", 00:20:29.975 "is_configured": true, 00:20:29.975 "data_offset": 0, 00:20:29.975 "data_size": 65536 00:20:29.975 }, 00:20:29.975 { 00:20:29.975 "name": "BaseBdev3", 00:20:29.975 "uuid": "cd86fba9-a4f9-4845-9d59-28995f252468", 00:20:29.975 "is_configured": true, 00:20:29.975 "data_offset": 0, 00:20:29.975 "data_size": 65536 00:20:29.975 }, 00:20:29.975 { 00:20:29.975 "name": "BaseBdev4", 00:20:29.975 "uuid": "0fde7c71-b4c0-4333-a619-fea0e8388f2f", 00:20:29.975 "is_configured": true, 00:20:29.975 "data_offset": 0, 00:20:29.975 "data_size": 65536 00:20:29.975 } 00:20:29.975 ] 00:20:29.975 }' 00:20:29.975 05:02:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:29.975 05:02:59 -- common/autotest_common.sh@10 -- # set +x 00:20:30.907 05:03:00 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:30.907 05:03:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:30.907 05:03:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.907 05:03:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:30.907 05:03:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:30.907 05:03:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:30.907 05:03:00 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:31.165 [2024-04-27 05:03:00.994073] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:31.165 05:03:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:31.165 05:03:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:31.165 05:03:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.165 05:03:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:31.422 05:03:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:31.422 05:03:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:31.422 05:03:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:31.679 [2024-04-27 05:03:01.528748] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:31.679 05:03:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:31.679 05:03:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:31.679 05:03:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.679 05:03:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:31.937 05:03:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:31.937 05:03:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:31.937 05:03:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:32.194 [2024-04-27 05:03:02.083234] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:32.194 [2024-04-27 05:03:02.083733] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:32.194 [2024-04-27 05:03:02.084053] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:32.465 [2024-04-27 05:03:02.105518] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:32.465 [2024-04-27 05:03:02.106012] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:20:32.465 05:03:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:32.465 05:03:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:32.465 05:03:02 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.465 05:03:02 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:32.726 05:03:02 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:32.726 05:03:02 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:32.726 05:03:02 -- bdev/bdev_raid.sh@287 -- # killprocess 133144 00:20:32.726 05:03:02 -- common/autotest_common.sh@926 -- # '[' -z 133144 ']' 00:20:32.726 05:03:02 -- common/autotest_common.sh@930 -- # kill -0 133144 00:20:32.726 05:03:02 -- common/autotest_common.sh@931 -- # uname 00:20:32.726 05:03:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:32.726 05:03:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133144 00:20:32.726 killing process with pid 133144 00:20:32.726 05:03:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:32.726 05:03:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:32.726 05:03:02 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 133144' 00:20:32.726 05:03:02 -- common/autotest_common.sh@945 -- # kill 133144 00:20:32.726 05:03:02 -- common/autotest_common.sh@950 -- # wait 133144 00:20:32.726 [2024-04-27 05:03:02.453577] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:32.726 [2024-04-27 05:03:02.453695] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:32.984 ************************************ 00:20:32.984 END TEST raid_state_function_test 00:20:32.984 ************************************ 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:32.984 00:20:32.984 real 0m14.547s 00:20:32.984 user 0m26.541s 00:20:32.984 sys 0m2.065s 00:20:32.984 05:03:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:32.984 05:03:02 -- common/autotest_common.sh@10 -- # set +x 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:20:32.984 05:03:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:20:32.984 05:03:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:32.984 05:03:02 -- common/autotest_common.sh@10 -- # set +x 00:20:32.984 ************************************ 00:20:32.984 START TEST raid_state_function_test_sb 00:20:32.984 ************************************ 00:20:32.984 05:03:02 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:20:32.984 05:03:02 -- 
bdev/bdev_raid.sh@226 -- # raid_pid=133589 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:32.984 Process raid pid: 133589 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 133589' 00:20:32.984 05:03:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 133589 /var/tmp/spdk-raid.sock 00:20:32.984 05:03:02 -- common/autotest_common.sh@819 -- # '[' -z 133589 ']' 00:20:32.984 05:03:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:32.984 05:03:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:32.984 05:03:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:32.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:32.984 05:03:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:32.984 05:03:02 -- common/autotest_common.sh@10 -- # set +x 00:20:33.242 [2024-04-27 05:03:02.934947] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:33.242 [2024-04-27 05:03:02.935457] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.242 [2024-04-27 05:03:03.110790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.499 [2024-04-27 05:03:03.239476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.499 [2024-04-27 05:03:03.322257] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:34.067 05:03:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:34.067 05:03:03 -- common/autotest_common.sh@852 -- # return 0 00:20:34.068 05:03:03 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:34.325 [2024-04-27 05:03:04.185006] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:34.325 [2024-04-27 05:03:04.185747] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:34.325 [2024-04-27 05:03:04.185910] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:34.325 [2024-04-27 05:03:04.186105] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:34.325 [2024-04-27 05:03:04.186244] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:34.325 [2024-04-27 05:03:04.186468] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:34.325 [2024-04-27 05:03:04.186598] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:34.325 [2024-04-27 05:03:04.186880] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:34.325 05:03:04 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:34.325 05:03:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:34.325 05:03:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:34.325 05:03:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 
00:20:34.325 05:03:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:34.325 05:03:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:34.325 05:03:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:34.325 05:03:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:34.325 05:03:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:34.325 05:03:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:34.325 05:03:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.325 05:03:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:34.582 05:03:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:34.582 "name": "Existed_Raid", 00:20:34.582 "uuid": "6b60109c-6727-4796-8e25-cea194aa9345", 00:20:34.582 "strip_size_kb": 0, 00:20:34.582 "state": "configuring", 00:20:34.582 "raid_level": "raid1", 00:20:34.582 "superblock": true, 00:20:34.582 "num_base_bdevs": 4, 00:20:34.582 "num_base_bdevs_discovered": 0, 00:20:34.582 "num_base_bdevs_operational": 4, 00:20:34.582 "base_bdevs_list": [ 00:20:34.582 { 00:20:34.582 "name": "BaseBdev1", 00:20:34.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.582 "is_configured": false, 00:20:34.582 "data_offset": 0, 00:20:34.582 "data_size": 0 00:20:34.582 }, 00:20:34.582 { 00:20:34.582 "name": "BaseBdev2", 00:20:34.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.582 "is_configured": false, 00:20:34.582 "data_offset": 0, 00:20:34.582 "data_size": 0 00:20:34.582 }, 00:20:34.582 { 00:20:34.582 "name": "BaseBdev3", 00:20:34.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.582 "is_configured": false, 00:20:34.582 "data_offset": 0, 00:20:34.582 "data_size": 0 00:20:34.582 }, 00:20:34.582 { 00:20:34.582 "name": "BaseBdev4", 00:20:34.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.582 "is_configured": false, 00:20:34.582 "data_offset": 0, 00:20:34.582 "data_size": 0 00:20:34.582 } 00:20:34.582 ] 00:20:34.582 }' 00:20:34.582 05:03:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:34.582 05:03:04 -- common/autotest_common.sh@10 -- # set +x 00:20:35.514 05:03:05 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:35.514 [2024-04-27 05:03:05.305529] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:35.514 [2024-04-27 05:03:05.305798] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:20:35.514 05:03:05 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:35.772 [2024-04-27 05:03:05.581638] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:35.772 [2024-04-27 05:03:05.582533] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:35.772 [2024-04-27 05:03:05.582680] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:35.772 [2024-04-27 05:03:05.582873] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:35.773 [2024-04-27 05:03:05.583056] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:35.773 [2024-04-27 05:03:05.583274] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:35.773 [2024-04-27 05:03:05.583404] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:35.773 [2024-04-27 05:03:05.583589] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:35.773 05:03:05 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:36.031 [2024-04-27 05:03:05.877201] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:36.031 BaseBdev1 00:20:36.031 05:03:05 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:36.031 05:03:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:20:36.031 05:03:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:36.031 05:03:05 -- common/autotest_common.sh@889 -- # local i 00:20:36.031 05:03:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:36.031 05:03:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:36.031 05:03:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:36.302 05:03:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:36.578 [ 00:20:36.578 { 00:20:36.578 "name": "BaseBdev1", 00:20:36.578 "aliases": [ 00:20:36.578 "2fbd1ccb-986a-4f0c-a785-d9588324bc6a" 00:20:36.578 ], 00:20:36.578 "product_name": "Malloc disk", 00:20:36.578 "block_size": 512, 00:20:36.578 "num_blocks": 65536, 00:20:36.578 "uuid": "2fbd1ccb-986a-4f0c-a785-d9588324bc6a", 00:20:36.578 "assigned_rate_limits": { 00:20:36.578 "rw_ios_per_sec": 0, 00:20:36.578 "rw_mbytes_per_sec": 0, 00:20:36.578 "r_mbytes_per_sec": 0, 00:20:36.578 "w_mbytes_per_sec": 0 00:20:36.578 }, 00:20:36.578 "claimed": true, 00:20:36.578 "claim_type": "exclusive_write", 00:20:36.578 "zoned": false, 00:20:36.578 "supported_io_types": { 00:20:36.578 "read": true, 00:20:36.578 "write": true, 00:20:36.578 "unmap": true, 00:20:36.578 "write_zeroes": true, 00:20:36.578 "flush": true, 00:20:36.578 "reset": true, 00:20:36.578 "compare": false, 00:20:36.578 "compare_and_write": false, 00:20:36.578 "abort": true, 00:20:36.578 "nvme_admin": false, 00:20:36.578 "nvme_io": false 00:20:36.578 }, 00:20:36.578 "memory_domains": [ 00:20:36.578 { 00:20:36.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.578 "dma_device_type": 2 00:20:36.578 } 00:20:36.578 ], 00:20:36.578 "driver_specific": {} 00:20:36.578 } 00:20:36.578 ] 00:20:36.578 05:03:06 -- common/autotest_common.sh@895 -- # return 0 00:20:36.578 05:03:06 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:36.578 05:03:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:36.578 05:03:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:36.578 05:03:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:36.578 05:03:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:36.578 05:03:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:36.578 05:03:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:36.578 05:03:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:36.578 05:03:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:36.578 05:03:06 -- bdev/bdev_raid.sh@125 
-- # local tmp 00:20:36.578 05:03:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.578 05:03:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.837 05:03:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:36.837 "name": "Existed_Raid", 00:20:36.837 "uuid": "035cad1c-2a2c-4de1-a76c-9b02c16362cf", 00:20:36.837 "strip_size_kb": 0, 00:20:36.837 "state": "configuring", 00:20:36.837 "raid_level": "raid1", 00:20:36.837 "superblock": true, 00:20:36.837 "num_base_bdevs": 4, 00:20:36.837 "num_base_bdevs_discovered": 1, 00:20:36.837 "num_base_bdevs_operational": 4, 00:20:36.837 "base_bdevs_list": [ 00:20:36.837 { 00:20:36.837 "name": "BaseBdev1", 00:20:36.837 "uuid": "2fbd1ccb-986a-4f0c-a785-d9588324bc6a", 00:20:36.837 "is_configured": true, 00:20:36.837 "data_offset": 2048, 00:20:36.837 "data_size": 63488 00:20:36.837 }, 00:20:36.837 { 00:20:36.837 "name": "BaseBdev2", 00:20:36.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.837 "is_configured": false, 00:20:36.837 "data_offset": 0, 00:20:36.837 "data_size": 0 00:20:36.837 }, 00:20:36.837 { 00:20:36.837 "name": "BaseBdev3", 00:20:36.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.837 "is_configured": false, 00:20:36.837 "data_offset": 0, 00:20:36.837 "data_size": 0 00:20:36.837 }, 00:20:36.837 { 00:20:36.837 "name": "BaseBdev4", 00:20:36.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.837 "is_configured": false, 00:20:36.837 "data_offset": 0, 00:20:36.837 "data_size": 0 00:20:36.837 } 00:20:36.837 ] 00:20:36.837 }' 00:20:36.837 05:03:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:36.837 05:03:06 -- common/autotest_common.sh@10 -- # set +x 00:20:37.404 05:03:07 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:37.663 [2024-04-27 05:03:07.497768] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:37.663 [2024-04-27 05:03:07.498184] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:20:37.663 05:03:07 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:20:37.663 05:03:07 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:38.230 05:03:07 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:38.230 BaseBdev1 00:20:38.230 05:03:08 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:20:38.230 05:03:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:20:38.230 05:03:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:38.230 05:03:08 -- common/autotest_common.sh@889 -- # local i 00:20:38.230 05:03:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:38.230 05:03:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:38.230 05:03:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:38.489 05:03:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:38.747 [ 00:20:38.748 { 00:20:38.748 "name": "BaseBdev1", 00:20:38.748 "aliases": [ 00:20:38.748 "599cc7cd-5f74-4f07-8232-2d084e793442" 00:20:38.748 
], 00:20:38.748 "product_name": "Malloc disk", 00:20:38.748 "block_size": 512, 00:20:38.748 "num_blocks": 65536, 00:20:38.748 "uuid": "599cc7cd-5f74-4f07-8232-2d084e793442", 00:20:38.748 "assigned_rate_limits": { 00:20:38.748 "rw_ios_per_sec": 0, 00:20:38.748 "rw_mbytes_per_sec": 0, 00:20:38.748 "r_mbytes_per_sec": 0, 00:20:38.748 "w_mbytes_per_sec": 0 00:20:38.748 }, 00:20:38.748 "claimed": false, 00:20:38.748 "zoned": false, 00:20:38.748 "supported_io_types": { 00:20:38.748 "read": true, 00:20:38.748 "write": true, 00:20:38.748 "unmap": true, 00:20:38.748 "write_zeroes": true, 00:20:38.748 "flush": true, 00:20:38.748 "reset": true, 00:20:38.748 "compare": false, 00:20:38.748 "compare_and_write": false, 00:20:38.748 "abort": true, 00:20:38.748 "nvme_admin": false, 00:20:38.748 "nvme_io": false 00:20:38.748 }, 00:20:38.748 "memory_domains": [ 00:20:38.748 { 00:20:38.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.748 "dma_device_type": 2 00:20:38.748 } 00:20:38.748 ], 00:20:38.748 "driver_specific": {} 00:20:38.748 } 00:20:38.748 ] 00:20:38.748 05:03:08 -- common/autotest_common.sh@895 -- # return 0 00:20:38.748 05:03:08 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:39.006 [2024-04-27 05:03:08.811017] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:39.006 [2024-04-27 05:03:08.813963] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:39.006 [2024-04-27 05:03:08.814243] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:39.006 [2024-04-27 05:03:08.814366] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:39.006 [2024-04-27 05:03:08.814442] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:39.006 [2024-04-27 05:03:08.814548] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:39.006 [2024-04-27 05:03:08.814709] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:39.006 05:03:08 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:39.006 05:03:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:39.007 05:03:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:39.007 05:03:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:39.007 05:03:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:39.007 05:03:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:39.007 05:03:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:39.007 05:03:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:39.007 05:03:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:39.007 05:03:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:39.007 05:03:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:39.007 05:03:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:39.007 05:03:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:39.007 05:03:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.265 05:03:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:39.265 "name": "Existed_Raid", 
00:20:39.265 "uuid": "b660d81d-49a7-47d4-a6bd-883b5f09e05b", 00:20:39.265 "strip_size_kb": 0, 00:20:39.265 "state": "configuring", 00:20:39.265 "raid_level": "raid1", 00:20:39.265 "superblock": true, 00:20:39.265 "num_base_bdevs": 4, 00:20:39.265 "num_base_bdevs_discovered": 1, 00:20:39.265 "num_base_bdevs_operational": 4, 00:20:39.265 "base_bdevs_list": [ 00:20:39.265 { 00:20:39.265 "name": "BaseBdev1", 00:20:39.265 "uuid": "599cc7cd-5f74-4f07-8232-2d084e793442", 00:20:39.265 "is_configured": true, 00:20:39.265 "data_offset": 2048, 00:20:39.265 "data_size": 63488 00:20:39.265 }, 00:20:39.265 { 00:20:39.265 "name": "BaseBdev2", 00:20:39.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.265 "is_configured": false, 00:20:39.265 "data_offset": 0, 00:20:39.265 "data_size": 0 00:20:39.265 }, 00:20:39.265 { 00:20:39.265 "name": "BaseBdev3", 00:20:39.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.265 "is_configured": false, 00:20:39.265 "data_offset": 0, 00:20:39.265 "data_size": 0 00:20:39.265 }, 00:20:39.265 { 00:20:39.265 "name": "BaseBdev4", 00:20:39.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.265 "is_configured": false, 00:20:39.265 "data_offset": 0, 00:20:39.265 "data_size": 0 00:20:39.265 } 00:20:39.265 ] 00:20:39.265 }' 00:20:39.265 05:03:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:39.265 05:03:09 -- common/autotest_common.sh@10 -- # set +x 00:20:40.203 05:03:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:40.203 [2024-04-27 05:03:10.022148] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:40.203 BaseBdev2 00:20:40.203 05:03:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:40.203 05:03:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:20:40.203 05:03:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:40.203 05:03:10 -- common/autotest_common.sh@889 -- # local i 00:20:40.203 05:03:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:40.203 05:03:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:40.203 05:03:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:40.462 05:03:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:40.722 [ 00:20:40.722 { 00:20:40.722 "name": "BaseBdev2", 00:20:40.722 "aliases": [ 00:20:40.722 "40aaade7-d5d7-43fa-88c5-314bb043cd28" 00:20:40.722 ], 00:20:40.722 "product_name": "Malloc disk", 00:20:40.722 "block_size": 512, 00:20:40.722 "num_blocks": 65536, 00:20:40.722 "uuid": "40aaade7-d5d7-43fa-88c5-314bb043cd28", 00:20:40.722 "assigned_rate_limits": { 00:20:40.722 "rw_ios_per_sec": 0, 00:20:40.722 "rw_mbytes_per_sec": 0, 00:20:40.722 "r_mbytes_per_sec": 0, 00:20:40.722 "w_mbytes_per_sec": 0 00:20:40.722 }, 00:20:40.722 "claimed": true, 00:20:40.722 "claim_type": "exclusive_write", 00:20:40.722 "zoned": false, 00:20:40.722 "supported_io_types": { 00:20:40.722 "read": true, 00:20:40.722 "write": true, 00:20:40.722 "unmap": true, 00:20:40.722 "write_zeroes": true, 00:20:40.722 "flush": true, 00:20:40.722 "reset": true, 00:20:40.722 "compare": false, 00:20:40.722 "compare_and_write": false, 00:20:40.722 "abort": true, 00:20:40.722 "nvme_admin": false, 00:20:40.722 "nvme_io": false 00:20:40.722 }, 00:20:40.722 
"memory_domains": [ 00:20:40.722 { 00:20:40.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.722 "dma_device_type": 2 00:20:40.722 } 00:20:40.722 ], 00:20:40.722 "driver_specific": {} 00:20:40.722 } 00:20:40.722 ] 00:20:40.722 05:03:10 -- common/autotest_common.sh@895 -- # return 0 00:20:40.722 05:03:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:40.722 05:03:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:40.722 05:03:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:40.722 05:03:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:40.722 05:03:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:40.722 05:03:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:40.722 05:03:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:40.722 05:03:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:40.722 05:03:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:40.722 05:03:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:40.722 05:03:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:40.722 05:03:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:40.722 05:03:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.722 05:03:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.981 05:03:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:40.981 "name": "Existed_Raid", 00:20:40.981 "uuid": "b660d81d-49a7-47d4-a6bd-883b5f09e05b", 00:20:40.981 "strip_size_kb": 0, 00:20:40.981 "state": "configuring", 00:20:40.981 "raid_level": "raid1", 00:20:40.981 "superblock": true, 00:20:40.981 "num_base_bdevs": 4, 00:20:40.981 "num_base_bdevs_discovered": 2, 00:20:40.981 "num_base_bdevs_operational": 4, 00:20:40.981 "base_bdevs_list": [ 00:20:40.981 { 00:20:40.981 "name": "BaseBdev1", 00:20:40.981 "uuid": "599cc7cd-5f74-4f07-8232-2d084e793442", 00:20:40.981 "is_configured": true, 00:20:40.981 "data_offset": 2048, 00:20:40.981 "data_size": 63488 00:20:40.981 }, 00:20:40.981 { 00:20:40.981 "name": "BaseBdev2", 00:20:40.981 "uuid": "40aaade7-d5d7-43fa-88c5-314bb043cd28", 00:20:40.981 "is_configured": true, 00:20:40.981 "data_offset": 2048, 00:20:40.981 "data_size": 63488 00:20:40.981 }, 00:20:40.981 { 00:20:40.981 "name": "BaseBdev3", 00:20:40.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.981 "is_configured": false, 00:20:40.981 "data_offset": 0, 00:20:40.981 "data_size": 0 00:20:40.981 }, 00:20:40.981 { 00:20:40.981 "name": "BaseBdev4", 00:20:40.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.981 "is_configured": false, 00:20:40.981 "data_offset": 0, 00:20:40.981 "data_size": 0 00:20:40.981 } 00:20:40.981 ] 00:20:40.981 }' 00:20:40.981 05:03:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:40.981 05:03:10 -- common/autotest_common.sh@10 -- # set +x 00:20:41.548 05:03:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:41.806 [2024-04-27 05:03:11.675694] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:41.806 BaseBdev3 00:20:41.806 05:03:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:20:41.806 05:03:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:20:41.806 05:03:11 -- common/autotest_common.sh@888 -- # local 
bdev_timeout= 00:20:41.806 05:03:11 -- common/autotest_common.sh@889 -- # local i 00:20:41.806 05:03:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:41.806 05:03:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:41.806 05:03:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:42.372 05:03:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:42.372 [ 00:20:42.372 { 00:20:42.372 "name": "BaseBdev3", 00:20:42.372 "aliases": [ 00:20:42.372 "210df138-cf3c-4dce-9252-0cdd7ba8a037" 00:20:42.372 ], 00:20:42.372 "product_name": "Malloc disk", 00:20:42.372 "block_size": 512, 00:20:42.372 "num_blocks": 65536, 00:20:42.372 "uuid": "210df138-cf3c-4dce-9252-0cdd7ba8a037", 00:20:42.372 "assigned_rate_limits": { 00:20:42.372 "rw_ios_per_sec": 0, 00:20:42.372 "rw_mbytes_per_sec": 0, 00:20:42.372 "r_mbytes_per_sec": 0, 00:20:42.372 "w_mbytes_per_sec": 0 00:20:42.372 }, 00:20:42.372 "claimed": true, 00:20:42.372 "claim_type": "exclusive_write", 00:20:42.372 "zoned": false, 00:20:42.372 "supported_io_types": { 00:20:42.372 "read": true, 00:20:42.372 "write": true, 00:20:42.372 "unmap": true, 00:20:42.372 "write_zeroes": true, 00:20:42.372 "flush": true, 00:20:42.372 "reset": true, 00:20:42.372 "compare": false, 00:20:42.372 "compare_and_write": false, 00:20:42.372 "abort": true, 00:20:42.372 "nvme_admin": false, 00:20:42.372 "nvme_io": false 00:20:42.372 }, 00:20:42.372 "memory_domains": [ 00:20:42.372 { 00:20:42.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.372 "dma_device_type": 2 00:20:42.372 } 00:20:42.372 ], 00:20:42.372 "driver_specific": {} 00:20:42.372 } 00:20:42.372 ] 00:20:42.372 05:03:12 -- common/autotest_common.sh@895 -- # return 0 00:20:42.372 05:03:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:42.372 05:03:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:42.372 05:03:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:42.372 05:03:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:42.372 05:03:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:42.372 05:03:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:42.372 05:03:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:42.372 05:03:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:42.372 05:03:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:42.372 05:03:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:42.372 05:03:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:42.372 05:03:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:42.372 05:03:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.372 05:03:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.630 05:03:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:42.630 "name": "Existed_Raid", 00:20:42.630 "uuid": "b660d81d-49a7-47d4-a6bd-883b5f09e05b", 00:20:42.630 "strip_size_kb": 0, 00:20:42.630 "state": "configuring", 00:20:42.630 "raid_level": "raid1", 00:20:42.630 "superblock": true, 00:20:42.630 "num_base_bdevs": 4, 00:20:42.630 "num_base_bdevs_discovered": 3, 00:20:42.630 "num_base_bdevs_operational": 4, 00:20:42.630 "base_bdevs_list": [ 00:20:42.630 { 
00:20:42.630 "name": "BaseBdev1", 00:20:42.630 "uuid": "599cc7cd-5f74-4f07-8232-2d084e793442", 00:20:42.630 "is_configured": true, 00:20:42.630 "data_offset": 2048, 00:20:42.630 "data_size": 63488 00:20:42.630 }, 00:20:42.630 { 00:20:42.630 "name": "BaseBdev2", 00:20:42.630 "uuid": "40aaade7-d5d7-43fa-88c5-314bb043cd28", 00:20:42.630 "is_configured": true, 00:20:42.630 "data_offset": 2048, 00:20:42.630 "data_size": 63488 00:20:42.630 }, 00:20:42.630 { 00:20:42.630 "name": "BaseBdev3", 00:20:42.630 "uuid": "210df138-cf3c-4dce-9252-0cdd7ba8a037", 00:20:42.630 "is_configured": true, 00:20:42.630 "data_offset": 2048, 00:20:42.630 "data_size": 63488 00:20:42.630 }, 00:20:42.630 { 00:20:42.630 "name": "BaseBdev4", 00:20:42.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.630 "is_configured": false, 00:20:42.630 "data_offset": 0, 00:20:42.630 "data_size": 0 00:20:42.630 } 00:20:42.630 ] 00:20:42.630 }' 00:20:42.630 05:03:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:42.630 05:03:12 -- common/autotest_common.sh@10 -- # set +x 00:20:43.563 05:03:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:43.563 [2024-04-27 05:03:13.383418] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:43.563 [2024-04-27 05:03:13.384099] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:20:43.563 [2024-04-27 05:03:13.384251] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:43.563 [2024-04-27 05:03:13.384505] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:43.563 [2024-04-27 05:03:13.385214] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:20:43.563 [2024-04-27 05:03:13.385361] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:20:43.563 BaseBdev4 00:20:43.563 [2024-04-27 05:03:13.385708] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:43.563 05:03:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:20:43.563 05:03:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:20:43.563 05:03:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:43.563 05:03:13 -- common/autotest_common.sh@889 -- # local i 00:20:43.563 05:03:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:43.563 05:03:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:43.563 05:03:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:43.821 05:03:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:44.080 [ 00:20:44.080 { 00:20:44.080 "name": "BaseBdev4", 00:20:44.080 "aliases": [ 00:20:44.080 "465e09c3-f280-4105-aa93-dcb4485f62b5" 00:20:44.080 ], 00:20:44.080 "product_name": "Malloc disk", 00:20:44.080 "block_size": 512, 00:20:44.080 "num_blocks": 65536, 00:20:44.080 "uuid": "465e09c3-f280-4105-aa93-dcb4485f62b5", 00:20:44.080 "assigned_rate_limits": { 00:20:44.080 "rw_ios_per_sec": 0, 00:20:44.080 "rw_mbytes_per_sec": 0, 00:20:44.080 "r_mbytes_per_sec": 0, 00:20:44.080 "w_mbytes_per_sec": 0 00:20:44.080 }, 00:20:44.080 "claimed": true, 00:20:44.080 "claim_type": "exclusive_write", 00:20:44.080 "zoned": false, 
00:20:44.080 "supported_io_types": { 00:20:44.080 "read": true, 00:20:44.080 "write": true, 00:20:44.080 "unmap": true, 00:20:44.080 "write_zeroes": true, 00:20:44.080 "flush": true, 00:20:44.080 "reset": true, 00:20:44.080 "compare": false, 00:20:44.080 "compare_and_write": false, 00:20:44.080 "abort": true, 00:20:44.080 "nvme_admin": false, 00:20:44.080 "nvme_io": false 00:20:44.080 }, 00:20:44.080 "memory_domains": [ 00:20:44.080 { 00:20:44.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.080 "dma_device_type": 2 00:20:44.080 } 00:20:44.080 ], 00:20:44.080 "driver_specific": {} 00:20:44.080 } 00:20:44.080 ] 00:20:44.080 05:03:13 -- common/autotest_common.sh@895 -- # return 0 00:20:44.080 05:03:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:44.080 05:03:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:44.080 05:03:13 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:44.080 05:03:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:44.080 05:03:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:44.080 05:03:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:44.080 05:03:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:44.080 05:03:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:44.080 05:03:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:44.080 05:03:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:44.080 05:03:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:44.080 05:03:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:44.080 05:03:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.080 05:03:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.339 05:03:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:44.339 "name": "Existed_Raid", 00:20:44.339 "uuid": "b660d81d-49a7-47d4-a6bd-883b5f09e05b", 00:20:44.339 "strip_size_kb": 0, 00:20:44.339 "state": "online", 00:20:44.339 "raid_level": "raid1", 00:20:44.339 "superblock": true, 00:20:44.339 "num_base_bdevs": 4, 00:20:44.339 "num_base_bdevs_discovered": 4, 00:20:44.339 "num_base_bdevs_operational": 4, 00:20:44.339 "base_bdevs_list": [ 00:20:44.339 { 00:20:44.339 "name": "BaseBdev1", 00:20:44.339 "uuid": "599cc7cd-5f74-4f07-8232-2d084e793442", 00:20:44.339 "is_configured": true, 00:20:44.339 "data_offset": 2048, 00:20:44.339 "data_size": 63488 00:20:44.339 }, 00:20:44.339 { 00:20:44.339 "name": "BaseBdev2", 00:20:44.339 "uuid": "40aaade7-d5d7-43fa-88c5-314bb043cd28", 00:20:44.339 "is_configured": true, 00:20:44.339 "data_offset": 2048, 00:20:44.339 "data_size": 63488 00:20:44.339 }, 00:20:44.339 { 00:20:44.339 "name": "BaseBdev3", 00:20:44.339 "uuid": "210df138-cf3c-4dce-9252-0cdd7ba8a037", 00:20:44.339 "is_configured": true, 00:20:44.339 "data_offset": 2048, 00:20:44.339 "data_size": 63488 00:20:44.339 }, 00:20:44.339 { 00:20:44.339 "name": "BaseBdev4", 00:20:44.339 "uuid": "465e09c3-f280-4105-aa93-dcb4485f62b5", 00:20:44.339 "is_configured": true, 00:20:44.339 "data_offset": 2048, 00:20:44.339 "data_size": 63488 00:20:44.339 } 00:20:44.339 ] 00:20:44.339 }' 00:20:44.339 05:03:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:44.339 05:03:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.906 05:03:14 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete 
BaseBdev1 00:20:45.179 [2024-04-27 05:03:15.016001] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:45.179 05:03:15 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:45.179 05:03:15 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:20:45.179 05:03:15 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:45.179 05:03:15 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:45.179 05:03:15 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:20:45.179 05:03:15 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:45.179 05:03:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:45.179 05:03:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:45.179 05:03:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:45.179 05:03:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:45.179 05:03:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:45.179 05:03:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:45.179 05:03:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:45.179 05:03:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:45.179 05:03:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:45.179 05:03:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.179 05:03:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.461 05:03:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:45.461 "name": "Existed_Raid", 00:20:45.461 "uuid": "b660d81d-49a7-47d4-a6bd-883b5f09e05b", 00:20:45.461 "strip_size_kb": 0, 00:20:45.461 "state": "online", 00:20:45.461 "raid_level": "raid1", 00:20:45.461 "superblock": true, 00:20:45.461 "num_base_bdevs": 4, 00:20:45.461 "num_base_bdevs_discovered": 3, 00:20:45.461 "num_base_bdevs_operational": 3, 00:20:45.461 "base_bdevs_list": [ 00:20:45.461 { 00:20:45.461 "name": null, 00:20:45.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.461 "is_configured": false, 00:20:45.461 "data_offset": 2048, 00:20:45.461 "data_size": 63488 00:20:45.461 }, 00:20:45.461 { 00:20:45.461 "name": "BaseBdev2", 00:20:45.461 "uuid": "40aaade7-d5d7-43fa-88c5-314bb043cd28", 00:20:45.461 "is_configured": true, 00:20:45.461 "data_offset": 2048, 00:20:45.461 "data_size": 63488 00:20:45.461 }, 00:20:45.461 { 00:20:45.461 "name": "BaseBdev3", 00:20:45.461 "uuid": "210df138-cf3c-4dce-9252-0cdd7ba8a037", 00:20:45.461 "is_configured": true, 00:20:45.461 "data_offset": 2048, 00:20:45.461 "data_size": 63488 00:20:45.461 }, 00:20:45.461 { 00:20:45.461 "name": "BaseBdev4", 00:20:45.461 "uuid": "465e09c3-f280-4105-aa93-dcb4485f62b5", 00:20:45.461 "is_configured": true, 00:20:45.461 "data_offset": 2048, 00:20:45.461 "data_size": 63488 00:20:45.461 } 00:20:45.461 ] 00:20:45.461 }' 00:20:45.461 05:03:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:45.461 05:03:15 -- common/autotest_common.sh@10 -- # set +x 00:20:46.029 05:03:15 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:46.029 05:03:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:46.029 05:03:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.029 05:03:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:46.595 05:03:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:46.595 05:03:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:20:46.595 05:03:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:46.595 [2024-04-27 05:03:16.453842] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:46.595 05:03:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:46.595 05:03:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:46.595 05:03:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.595 05:03:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:46.854 05:03:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:46.854 05:03:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:46.854 05:03:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:47.112 [2024-04-27 05:03:16.988253] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:47.371 05:03:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:47.371 05:03:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:47.371 05:03:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.371 05:03:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:47.371 05:03:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:47.371 05:03:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:47.371 05:03:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:47.629 [2024-04-27 05:03:17.485065] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:47.629 [2024-04-27 05:03:17.485400] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:47.629 [2024-04-27 05:03:17.485617] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:47.629 [2024-04-27 05:03:17.507624] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:47.629 [2024-04-27 05:03:17.508071] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:20:47.629 05:03:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:47.629 05:03:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:47.629 05:03:17 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.629 05:03:17 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:48.196 05:03:17 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:48.196 05:03:17 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:48.196 05:03:17 -- bdev/bdev_raid.sh@287 -- # killprocess 133589 00:20:48.196 05:03:17 -- common/autotest_common.sh@926 -- # '[' -z 133589 ']' 00:20:48.196 05:03:17 -- common/autotest_common.sh@930 -- # kill -0 133589 00:20:48.196 05:03:17 -- common/autotest_common.sh@931 -- # uname 00:20:48.196 05:03:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:48.196 05:03:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133589 00:20:48.196 killing process with pid 133589 00:20:48.196 05:03:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:48.196 05:03:17 -- common/autotest_common.sh@936 -- # '[' 
reactor_0 = sudo ']' 00:20:48.196 05:03:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133589' 00:20:48.196 05:03:17 -- common/autotest_common.sh@945 -- # kill 133589 00:20:48.196 05:03:17 -- common/autotest_common.sh@950 -- # wait 133589 00:20:48.196 [2024-04-27 05:03:17.826732] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:48.196 [2024-04-27 05:03:17.826849] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:48.454 00:20:48.454 real 0m15.315s 00:20:48.454 user 0m27.938s 00:20:48.454 sys 0m2.103s 00:20:48.454 05:03:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:48.454 05:03:18 -- common/autotest_common.sh@10 -- # set +x 00:20:48.454 ************************************ 00:20:48.454 END TEST raid_state_function_test_sb 00:20:48.454 ************************************ 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:20:48.454 05:03:18 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:20:48.454 05:03:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:48.454 05:03:18 -- common/autotest_common.sh@10 -- # set +x 00:20:48.454 ************************************ 00:20:48.454 START TEST raid_superblock_test 00:20:48.454 ************************************ 00:20:48.454 05:03:18 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@357 -- # raid_pid=134040 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:48.454 05:03:18 -- bdev/bdev_raid.sh@358 -- # waitforlisten 134040 /var/tmp/spdk-raid.sock 00:20:48.454 05:03:18 -- common/autotest_common.sh@819 -- # '[' -z 134040 ']' 00:20:48.454 05:03:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:48.454 05:03:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:48.454 05:03:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:48.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
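The trace above kills the state-function test app (pid 133589), and raid_superblock_test then boots its own bdev_svc target on the /var/tmp/spdk-raid.sock RPC socket with bdev_raid debug logging before issuing any RPCs. A minimal sketch of that startup, assuming the repository path shown in the log; the polling loop here stands in for the harness's waitforlisten helper and uses the generic rpc_get_methods RPC, not necessarily what waitforlisten does internally:

```bash
#!/usr/bin/env bash
# Sketch only: start a standalone bdev_svc app for raid RPC testing.
rootdir=/home/vagrant/spdk_repo/spdk        # assumed checkout location (from the log)
rpc_sock=/var/tmp/spdk-raid.sock

"$rootdir/test/app/bdev_svc/bdev_svc" -r "$rpc_sock" -L bdev_raid &
raid_pid=$!

# Stand-in for waitforlisten: poll until the app answers on the socket.
until "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done
echo "bdev_svc (pid $raid_pid) is listening on $rpc_sock"
```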
00:20:48.454 05:03:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:48.454 05:03:18 -- common/autotest_common.sh@10 -- # set +x 00:20:48.454 [2024-04-27 05:03:18.294091] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:48.454 [2024-04-27 05:03:18.294614] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134040 ] 00:20:48.712 [2024-04-27 05:03:18.451174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.712 [2024-04-27 05:03:18.571303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.971 [2024-04-27 05:03:18.647363] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:49.538 05:03:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:49.538 05:03:19 -- common/autotest_common.sh@852 -- # return 0 00:20:49.538 05:03:19 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:20:49.538 05:03:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:49.538 05:03:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:20:49.538 05:03:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:20:49.538 05:03:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:49.538 05:03:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:49.538 05:03:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:49.538 05:03:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:49.538 05:03:19 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:49.797 malloc1 00:20:49.797 05:03:19 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:50.056 [2024-04-27 05:03:19.828806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:50.056 [2024-04-27 05:03:19.829729] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.056 [2024-04-27 05:03:19.830053] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:20:50.056 [2024-04-27 05:03:19.830386] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.056 [2024-04-27 05:03:19.833617] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.056 [2024-04-27 05:03:19.833925] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:50.056 pt1 00:20:50.056 05:03:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:50.056 05:03:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:50.056 05:03:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:20:50.056 05:03:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:20:50.056 05:03:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:50.056 05:03:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:50.056 05:03:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:50.056 05:03:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:50.056 05:03:19 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:50.314 malloc2 00:20:50.314 05:03:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:50.573 [2024-04-27 05:03:20.309123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:50.573 [2024-04-27 05:03:20.309810] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.573 [2024-04-27 05:03:20.310115] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:50.573 [2024-04-27 05:03:20.310445] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.573 [2024-04-27 05:03:20.313486] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.573 [2024-04-27 05:03:20.313781] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:50.573 pt2 00:20:50.573 05:03:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:50.573 05:03:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:50.573 05:03:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:20:50.573 05:03:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:20:50.573 05:03:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:50.573 05:03:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:50.573 05:03:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:50.573 05:03:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:50.573 05:03:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:50.831 malloc3 00:20:50.831 05:03:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:51.091 [2024-04-27 05:03:20.842965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:51.091 [2024-04-27 05:03:20.843745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.091 [2024-04-27 05:03:20.844045] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:20:51.091 [2024-04-27 05:03:20.844345] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.091 [2024-04-27 05:03:20.847416] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.091 [2024-04-27 05:03:20.847702] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:51.091 pt3 00:20:51.091 05:03:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:51.091 05:03:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:51.091 05:03:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:20:51.091 05:03:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:20:51.091 05:03:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:51.091 05:03:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:51.091 05:03:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:51.091 05:03:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:51.091 05:03:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:20:51.349 malloc4 00:20:51.349 05:03:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:51.605 [2024-04-27 05:03:21.363199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:51.605 [2024-04-27 05:03:21.363875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.605 [2024-04-27 05:03:21.364173] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:20:51.605 [2024-04-27 05:03:21.364479] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.605 [2024-04-27 05:03:21.367560] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.605 [2024-04-27 05:03:21.367846] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:51.605 pt4 00:20:51.605 05:03:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:51.605 05:03:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:51.605 05:03:21 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:20:51.863 [2024-04-27 05:03:21.596540] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:51.863 [2024-04-27 05:03:21.599382] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:51.863 [2024-04-27 05:03:21.599640] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:51.863 [2024-04-27 05:03:21.599756] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:51.863 [2024-04-27 05:03:21.600170] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:20:51.863 [2024-04-27 05:03:21.600295] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:51.863 [2024-04-27 05:03:21.600570] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:51.863 [2024-04-27 05:03:21.601150] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:20:51.863 [2024-04-27 05:03:21.601267] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:20:51.863 [2024-04-27 05:03:21.601612] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:51.863 05:03:21 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:51.863 05:03:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:51.863 05:03:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:51.863 05:03:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:51.863 05:03:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:51.863 05:03:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:51.863 05:03:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:51.863 05:03:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:51.863 05:03:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:51.863 05:03:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:51.863 05:03:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
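Everything bdev_raid_create needs at this point was built by the preceding RPCs: four 32 MiB malloc bdevs with 512-byte blocks, each wrapped in a passthru bdev with a fixed UUID, then assembled into a raid1 bdev whose superblock is written because of the -s flag. A condensed sketch of that sequence using only the commands visible in the trace (the loop is illustrative, not the harness's exact code):

```bash
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

for i in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b "malloc$i"             # 32 MiB, 512-byte blocks
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
         -u "00000000-0000-0000-0000-00000000000$i"           # fixed per-member UUID
done

# raid1 over the four passthru bdevs; -s writes a raid superblock to each member.
$rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
```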
00:20:51.863 05:03:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.120 05:03:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:52.120 "name": "raid_bdev1", 00:20:52.120 "uuid": "5a73ad88-9640-473a-b659-f2c54727f9fc", 00:20:52.120 "strip_size_kb": 0, 00:20:52.120 "state": "online", 00:20:52.120 "raid_level": "raid1", 00:20:52.120 "superblock": true, 00:20:52.120 "num_base_bdevs": 4, 00:20:52.120 "num_base_bdevs_discovered": 4, 00:20:52.120 "num_base_bdevs_operational": 4, 00:20:52.120 "base_bdevs_list": [ 00:20:52.120 { 00:20:52.120 "name": "pt1", 00:20:52.120 "uuid": "9e4bf3b5-e98c-57d8-a024-43dd3c911df2", 00:20:52.121 "is_configured": true, 00:20:52.121 "data_offset": 2048, 00:20:52.121 "data_size": 63488 00:20:52.121 }, 00:20:52.121 { 00:20:52.121 "name": "pt2", 00:20:52.121 "uuid": "66e5182b-728e-5f2f-8fc0-103140b6ff56", 00:20:52.121 "is_configured": true, 00:20:52.121 "data_offset": 2048, 00:20:52.121 "data_size": 63488 00:20:52.121 }, 00:20:52.121 { 00:20:52.121 "name": "pt3", 00:20:52.121 "uuid": "cb1909d8-7ae6-5de7-8174-3cd0114164f8", 00:20:52.121 "is_configured": true, 00:20:52.121 "data_offset": 2048, 00:20:52.121 "data_size": 63488 00:20:52.121 }, 00:20:52.121 { 00:20:52.121 "name": "pt4", 00:20:52.121 "uuid": "1495c1fe-9c07-5cdc-be1d-657c0c0a255b", 00:20:52.121 "is_configured": true, 00:20:52.121 "data_offset": 2048, 00:20:52.121 "data_size": 63488 00:20:52.121 } 00:20:52.121 ] 00:20:52.121 }' 00:20:52.121 05:03:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:52.121 05:03:21 -- common/autotest_common.sh@10 -- # set +x 00:20:52.686 05:03:22 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:52.686 05:03:22 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:20:52.944 [2024-04-27 05:03:22.742146] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:52.944 05:03:22 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=5a73ad88-9640-473a-b659-f2c54727f9fc 00:20:52.944 05:03:22 -- bdev/bdev_raid.sh@380 -- # '[' -z 5a73ad88-9640-473a-b659-f2c54727f9fc ']' 00:20:52.944 05:03:22 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:53.201 [2024-04-27 05:03:23.009917] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:53.201 [2024-04-27 05:03:23.010238] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:53.201 [2024-04-27 05:03:23.010517] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:53.201 [2024-04-27 05:03:23.010793] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:53.201 [2024-04-27 05:03:23.010919] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:20:53.201 05:03:23 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.201 05:03:23 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:20:53.459 05:03:23 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:20:53.459 05:03:23 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:20:53.459 05:03:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:53.459 05:03:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
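The verify_raid_bdev_state calls in this trace all reduce to the same query: dump every raid bdev with bdev_raid_get_bdevs all, pick out the one under test with jq, and compare fields such as state and num_base_bdevs_discovered against the expected values. A rough equivalent, assuming only the field names visible in the JSON dumped above (the real check in bdev_raid.sh may parse more of the output):

```bash
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(jq -r '.state' <<< "$info")
level=$(jq -r '.raid_level' <<< "$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")

# Expected right after creation, per the JSON shown above.
[[ $state == online && $level == raid1 && $discovered -eq 4 ]] \
    || echo "unexpected raid_bdev1 state: $state/$level/$discovered"
```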
00:20:53.716 05:03:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:53.716 05:03:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:53.974 05:03:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:53.974 05:03:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:54.231 05:03:24 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:54.231 05:03:24 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:54.516 05:03:24 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:54.516 05:03:24 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:54.774 05:03:24 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:20:54.774 05:03:24 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:54.774 05:03:24 -- common/autotest_common.sh@640 -- # local es=0 00:20:54.774 05:03:24 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:54.774 05:03:24 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:54.774 05:03:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:54.774 05:03:24 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:54.774 05:03:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:54.774 05:03:24 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:54.774 05:03:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:54.774 05:03:24 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:54.774 05:03:24 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:54.774 05:03:24 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:55.041 [2024-04-27 05:03:24.870312] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:55.041 [2024-04-27 05:03:24.873077] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:55.041 [2024-04-27 05:03:24.873279] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:55.041 [2024-04-27 05:03:24.873469] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:55.041 [2024-04-27 05:03:24.873647] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:20:55.041 [2024-04-27 05:03:24.874468] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:20:55.041 [2024-04-27 05:03:24.874785] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:20:55.041 [2024-04-27 05:03:24.875109] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:20:55.041 [2024-04-27 05:03:24.875381] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:55.041 [2024-04-27 05:03:24.875515] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:20:55.041 request: 00:20:55.041 { 00:20:55.041 "name": "raid_bdev1", 00:20:55.041 "raid_level": "raid1", 00:20:55.041 "base_bdevs": [ 00:20:55.041 "malloc1", 00:20:55.041 "malloc2", 00:20:55.041 "malloc3", 00:20:55.041 "malloc4" 00:20:55.041 ], 00:20:55.041 "superblock": false, 00:20:55.041 "method": "bdev_raid_create", 00:20:55.041 "req_id": 1 00:20:55.041 } 00:20:55.041 Got JSON-RPC error response 00:20:55.041 response: 00:20:55.041 { 00:20:55.041 "code": -17, 00:20:55.041 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:55.041 } 00:20:55.041 05:03:24 -- common/autotest_common.sh@643 -- # es=1 00:20:55.041 05:03:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:55.041 05:03:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:55.041 05:03:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:55.041 05:03:24 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.041 05:03:24 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:20:55.309 05:03:25 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:20:55.309 05:03:25 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:20:55.309 05:03:25 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:55.566 [2024-04-27 05:03:25.392183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:55.566 [2024-04-27 05:03:25.392907] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:55.566 [2024-04-27 05:03:25.393213] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:55.566 [2024-04-27 05:03:25.393512] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:55.566 [2024-04-27 05:03:25.396640] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:55.567 [2024-04-27 05:03:25.396965] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:55.567 [2024-04-27 05:03:25.397340] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:55.567 [2024-04-27 05:03:25.397539] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:55.567 pt1 00:20:55.567 05:03:25 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:55.567 05:03:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:55.567 05:03:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:55.567 05:03:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:55.567 05:03:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:55.567 05:03:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:55.567 05:03:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:55.567 05:03:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:55.567 05:03:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:55.567 05:03:25 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:20:55.567 05:03:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.567 05:03:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.824 05:03:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:55.824 "name": "raid_bdev1", 00:20:55.824 "uuid": "5a73ad88-9640-473a-b659-f2c54727f9fc", 00:20:55.824 "strip_size_kb": 0, 00:20:55.824 "state": "configuring", 00:20:55.824 "raid_level": "raid1", 00:20:55.824 "superblock": true, 00:20:55.824 "num_base_bdevs": 4, 00:20:55.824 "num_base_bdevs_discovered": 1, 00:20:55.824 "num_base_bdevs_operational": 4, 00:20:55.824 "base_bdevs_list": [ 00:20:55.824 { 00:20:55.824 "name": "pt1", 00:20:55.824 "uuid": "9e4bf3b5-e98c-57d8-a024-43dd3c911df2", 00:20:55.824 "is_configured": true, 00:20:55.824 "data_offset": 2048, 00:20:55.824 "data_size": 63488 00:20:55.824 }, 00:20:55.824 { 00:20:55.824 "name": null, 00:20:55.824 "uuid": "66e5182b-728e-5f2f-8fc0-103140b6ff56", 00:20:55.824 "is_configured": false, 00:20:55.824 "data_offset": 2048, 00:20:55.824 "data_size": 63488 00:20:55.824 }, 00:20:55.824 { 00:20:55.824 "name": null, 00:20:55.824 "uuid": "cb1909d8-7ae6-5de7-8174-3cd0114164f8", 00:20:55.824 "is_configured": false, 00:20:55.824 "data_offset": 2048, 00:20:55.824 "data_size": 63488 00:20:55.824 }, 00:20:55.824 { 00:20:55.824 "name": null, 00:20:55.824 "uuid": "1495c1fe-9c07-5cdc-be1d-657c0c0a255b", 00:20:55.824 "is_configured": false, 00:20:55.824 "data_offset": 2048, 00:20:55.824 "data_size": 63488 00:20:55.824 } 00:20:55.824 ] 00:20:55.824 }' 00:20:55.824 05:03:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:55.824 05:03:25 -- common/autotest_common.sh@10 -- # set +x 00:20:56.391 05:03:26 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:20:56.391 05:03:26 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:56.650 [2024-04-27 05:03:26.509250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:56.650 [2024-04-27 05:03:26.509679] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.650 [2024-04-27 05:03:26.509800] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:56.650 [2024-04-27 05:03:26.510030] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:56.650 [2024-04-27 05:03:26.510684] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.650 [2024-04-27 05:03:26.510872] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:56.650 [2024-04-27 05:03:26.511127] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:56.650 [2024-04-27 05:03:26.511279] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:56.650 pt2 00:20:56.650 05:03:26 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:56.909 [2024-04-27 05:03:26.753338] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:56.909 05:03:26 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:56.909 05:03:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:56.909 05:03:26 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:20:56.909 05:03:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:56.909 05:03:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:56.910 05:03:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:56.910 05:03:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:56.910 05:03:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:56.910 05:03:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:56.910 05:03:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:56.910 05:03:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.910 05:03:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.168 05:03:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:57.168 "name": "raid_bdev1", 00:20:57.168 "uuid": "5a73ad88-9640-473a-b659-f2c54727f9fc", 00:20:57.168 "strip_size_kb": 0, 00:20:57.168 "state": "configuring", 00:20:57.168 "raid_level": "raid1", 00:20:57.168 "superblock": true, 00:20:57.168 "num_base_bdevs": 4, 00:20:57.168 "num_base_bdevs_discovered": 1, 00:20:57.168 "num_base_bdevs_operational": 4, 00:20:57.168 "base_bdevs_list": [ 00:20:57.168 { 00:20:57.168 "name": "pt1", 00:20:57.168 "uuid": "9e4bf3b5-e98c-57d8-a024-43dd3c911df2", 00:20:57.169 "is_configured": true, 00:20:57.169 "data_offset": 2048, 00:20:57.169 "data_size": 63488 00:20:57.169 }, 00:20:57.169 { 00:20:57.169 "name": null, 00:20:57.169 "uuid": "66e5182b-728e-5f2f-8fc0-103140b6ff56", 00:20:57.169 "is_configured": false, 00:20:57.169 "data_offset": 2048, 00:20:57.169 "data_size": 63488 00:20:57.169 }, 00:20:57.169 { 00:20:57.169 "name": null, 00:20:57.169 "uuid": "cb1909d8-7ae6-5de7-8174-3cd0114164f8", 00:20:57.169 "is_configured": false, 00:20:57.169 "data_offset": 2048, 00:20:57.169 "data_size": 63488 00:20:57.169 }, 00:20:57.169 { 00:20:57.169 "name": null, 00:20:57.169 "uuid": "1495c1fe-9c07-5cdc-be1d-657c0c0a255b", 00:20:57.169 "is_configured": false, 00:20:57.169 "data_offset": 2048, 00:20:57.169 "data_size": 63488 00:20:57.169 } 00:20:57.169 ] 00:20:57.169 }' 00:20:57.169 05:03:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:57.169 05:03:27 -- common/autotest_common.sh@10 -- # set +x 00:20:58.104 05:03:27 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:20:58.104 05:03:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:58.104 05:03:27 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:58.104 [2024-04-27 05:03:27.993615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:58.104 [2024-04-27 05:03:27.993999] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.104 [2024-04-27 05:03:27.994100] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:58.104 [2024-04-27 05:03:27.994271] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.104 [2024-04-27 05:03:27.994916] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.104 [2024-04-27 05:03:27.995111] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:58.104 [2024-04-27 05:03:27.995343] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:58.104 [2024-04-27 
05:03:27.995481] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:58.104 pt2 00:20:58.363 05:03:28 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:58.363 05:03:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:58.363 05:03:28 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:58.363 [2024-04-27 05:03:28.229685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:58.363 [2024-04-27 05:03:28.230103] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.363 [2024-04-27 05:03:28.230204] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:58.363 [2024-04-27 05:03:28.230484] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.363 [2024-04-27 05:03:28.231202] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.363 [2024-04-27 05:03:28.231404] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:58.363 [2024-04-27 05:03:28.231635] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:58.363 [2024-04-27 05:03:28.231784] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:58.363 pt3 00:20:58.363 05:03:28 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:58.363 05:03:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:58.363 05:03:28 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:58.622 [2024-04-27 05:03:28.469765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:58.622 [2024-04-27 05:03:28.470179] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.622 [2024-04-27 05:03:28.470280] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:58.622 [2024-04-27 05:03:28.470505] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.622 [2024-04-27 05:03:28.471128] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.622 [2024-04-27 05:03:28.471322] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:58.622 [2024-04-27 05:03:28.471559] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:58.622 [2024-04-27 05:03:28.471698] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:58.622 [2024-04-27 05:03:28.472006] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:20:58.622 [2024-04-27 05:03:28.472136] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:58.622 [2024-04-27 05:03:28.472281] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:58.622 [2024-04-27 05:03:28.472798] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:20:58.622 [2024-04-27 05:03:28.472931] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:20:58.622 [2024-04-27 05:03:28.473172] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:58.622 pt4 
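This stretch also explains why the earlier bdev_raid_create over malloc1..malloc4 failed with -17 "File exists": the raid superblock written by the original -s create is still on the malloc bdevs, so simply re-registering the passthru bdevs lets the examine path re-assemble raid_bdev1 on its own, moving it from configuring to online as each member is claimed. A sketch of that re-assembly, with a per-step state query added purely for illustration:

```bash
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

for i in 1 2 3 4; do
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
         -u "00000000-0000-0000-0000-00000000000$i"
    # No bdev_raid_create here: the superblock already on malloc$i is enough.
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
done
# Prints "configuring" until the last member appears, then "online".
```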
00:20:58.622 05:03:28 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:58.622 05:03:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:58.622 05:03:28 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:58.622 05:03:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:58.622 05:03:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:58.622 05:03:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:58.622 05:03:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:58.622 05:03:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:58.622 05:03:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:58.622 05:03:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:58.622 05:03:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:58.622 05:03:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:58.622 05:03:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.622 05:03:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.880 05:03:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:58.880 "name": "raid_bdev1", 00:20:58.880 "uuid": "5a73ad88-9640-473a-b659-f2c54727f9fc", 00:20:58.880 "strip_size_kb": 0, 00:20:58.880 "state": "online", 00:20:58.880 "raid_level": "raid1", 00:20:58.880 "superblock": true, 00:20:58.880 "num_base_bdevs": 4, 00:20:58.880 "num_base_bdevs_discovered": 4, 00:20:58.880 "num_base_bdevs_operational": 4, 00:20:58.880 "base_bdevs_list": [ 00:20:58.880 { 00:20:58.880 "name": "pt1", 00:20:58.880 "uuid": "9e4bf3b5-e98c-57d8-a024-43dd3c911df2", 00:20:58.880 "is_configured": true, 00:20:58.880 "data_offset": 2048, 00:20:58.880 "data_size": 63488 00:20:58.880 }, 00:20:58.880 { 00:20:58.880 "name": "pt2", 00:20:58.880 "uuid": "66e5182b-728e-5f2f-8fc0-103140b6ff56", 00:20:58.880 "is_configured": true, 00:20:58.880 "data_offset": 2048, 00:20:58.880 "data_size": 63488 00:20:58.880 }, 00:20:58.880 { 00:20:58.880 "name": "pt3", 00:20:58.880 "uuid": "cb1909d8-7ae6-5de7-8174-3cd0114164f8", 00:20:58.880 "is_configured": true, 00:20:58.880 "data_offset": 2048, 00:20:58.880 "data_size": 63488 00:20:58.880 }, 00:20:58.881 { 00:20:58.881 "name": "pt4", 00:20:58.881 "uuid": "1495c1fe-9c07-5cdc-be1d-657c0c0a255b", 00:20:58.881 "is_configured": true, 00:20:58.881 "data_offset": 2048, 00:20:58.881 "data_size": 63488 00:20:58.881 } 00:20:58.881 ] 00:20:58.881 }' 00:20:58.881 05:03:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:58.881 05:03:28 -- common/autotest_common.sh@10 -- # set +x 00:20:59.814 05:03:29 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:20:59.814 05:03:29 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:59.814 [2024-04-27 05:03:29.698296] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:00.072 05:03:29 -- bdev/bdev_raid.sh@430 -- # '[' 5a73ad88-9640-473a-b659-f2c54727f9fc '!=' 5a73ad88-9640-473a-b659-f2c54727f9fc ']' 00:21:00.072 05:03:29 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:21:00.072 05:03:29 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:00.072 05:03:29 -- bdev/bdev_raid.sh@196 -- # return 0 00:21:00.072 05:03:29 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:00.072 [2024-04-27 05:03:29.970199] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:00.330 05:03:29 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:00.330 05:03:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:00.330 05:03:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:00.330 05:03:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:00.330 05:03:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:00.330 05:03:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:00.330 05:03:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:00.330 05:03:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:00.330 05:03:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:00.330 05:03:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:00.330 05:03:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.330 05:03:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.588 05:03:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:00.588 "name": "raid_bdev1", 00:21:00.588 "uuid": "5a73ad88-9640-473a-b659-f2c54727f9fc", 00:21:00.588 "strip_size_kb": 0, 00:21:00.588 "state": "online", 00:21:00.588 "raid_level": "raid1", 00:21:00.588 "superblock": true, 00:21:00.588 "num_base_bdevs": 4, 00:21:00.588 "num_base_bdevs_discovered": 3, 00:21:00.588 "num_base_bdevs_operational": 3, 00:21:00.588 "base_bdevs_list": [ 00:21:00.588 { 00:21:00.588 "name": null, 00:21:00.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.588 "is_configured": false, 00:21:00.588 "data_offset": 2048, 00:21:00.588 "data_size": 63488 00:21:00.588 }, 00:21:00.588 { 00:21:00.588 "name": "pt2", 00:21:00.588 "uuid": "66e5182b-728e-5f2f-8fc0-103140b6ff56", 00:21:00.588 "is_configured": true, 00:21:00.588 "data_offset": 2048, 00:21:00.588 "data_size": 63488 00:21:00.588 }, 00:21:00.588 { 00:21:00.588 "name": "pt3", 00:21:00.588 "uuid": "cb1909d8-7ae6-5de7-8174-3cd0114164f8", 00:21:00.588 "is_configured": true, 00:21:00.588 "data_offset": 2048, 00:21:00.588 "data_size": 63488 00:21:00.588 }, 00:21:00.588 { 00:21:00.588 "name": "pt4", 00:21:00.588 "uuid": "1495c1fe-9c07-5cdc-be1d-657c0c0a255b", 00:21:00.588 "is_configured": true, 00:21:00.588 "data_offset": 2048, 00:21:00.588 "data_size": 63488 00:21:00.588 } 00:21:00.588 ] 00:21:00.588 }' 00:21:00.588 05:03:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:00.588 05:03:30 -- common/autotest_common.sh@10 -- # set +x 00:21:01.152 05:03:30 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:01.410 [2024-04-27 05:03:31.082416] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:01.410 [2024-04-27 05:03:31.082766] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:01.410 [2024-04-27 05:03:31.083000] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:01.410 [2024-04-27 05:03:31.083266] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:01.410 [2024-04-27 05:03:31.083400] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:21:01.410 05:03:31 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:21:01.410 05:03:31 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:21:01.668 05:03:31 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:21:01.668 05:03:31 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:21:01.668 05:03:31 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:21:01.668 05:03:31 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:21:01.668 05:03:31 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:01.925 05:03:31 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:21:01.925 05:03:31 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:21:01.925 05:03:31 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:02.183 05:03:31 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:21:02.183 05:03:31 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:21:02.183 05:03:31 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:02.440 05:03:32 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:21:02.440 05:03:32 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:21:02.440 05:03:32 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:21:02.440 05:03:32 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:21:02.440 05:03:32 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:02.698 [2024-04-27 05:03:32.386674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:02.698 [2024-04-27 05:03:32.387100] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.698 [2024-04-27 05:03:32.387193] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:21:02.698 [2024-04-27 05:03:32.387468] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.698 [2024-04-27 05:03:32.390395] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.698 [2024-04-27 05:03:32.390609] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:02.698 [2024-04-27 05:03:32.390864] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:02.698 [2024-04-27 05:03:32.391020] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:02.698 pt2 00:21:02.698 05:03:32 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:02.698 05:03:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:02.698 05:03:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:02.698 05:03:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:02.698 05:03:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:02.698 05:03:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:02.698 05:03:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:02.698 05:03:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:02.698 05:03:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:02.698 05:03:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:02.698 05:03:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.698 05:03:32 -- bdev/bdev_raid.sh@127 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.956 05:03:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:02.956 "name": "raid_bdev1", 00:21:02.956 "uuid": "5a73ad88-9640-473a-b659-f2c54727f9fc", 00:21:02.956 "strip_size_kb": 0, 00:21:02.956 "state": "configuring", 00:21:02.956 "raid_level": "raid1", 00:21:02.956 "superblock": true, 00:21:02.956 "num_base_bdevs": 4, 00:21:02.956 "num_base_bdevs_discovered": 1, 00:21:02.956 "num_base_bdevs_operational": 3, 00:21:02.956 "base_bdevs_list": [ 00:21:02.956 { 00:21:02.956 "name": null, 00:21:02.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.956 "is_configured": false, 00:21:02.956 "data_offset": 2048, 00:21:02.956 "data_size": 63488 00:21:02.956 }, 00:21:02.956 { 00:21:02.956 "name": "pt2", 00:21:02.956 "uuid": "66e5182b-728e-5f2f-8fc0-103140b6ff56", 00:21:02.956 "is_configured": true, 00:21:02.956 "data_offset": 2048, 00:21:02.956 "data_size": 63488 00:21:02.956 }, 00:21:02.956 { 00:21:02.956 "name": null, 00:21:02.956 "uuid": "cb1909d8-7ae6-5de7-8174-3cd0114164f8", 00:21:02.956 "is_configured": false, 00:21:02.956 "data_offset": 2048, 00:21:02.956 "data_size": 63488 00:21:02.956 }, 00:21:02.956 { 00:21:02.956 "name": null, 00:21:02.956 "uuid": "1495c1fe-9c07-5cdc-be1d-657c0c0a255b", 00:21:02.956 "is_configured": false, 00:21:02.956 "data_offset": 2048, 00:21:02.956 "data_size": 63488 00:21:02.956 } 00:21:02.956 ] 00:21:02.956 }' 00:21:02.956 05:03:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:02.956 05:03:32 -- common/autotest_common.sh@10 -- # set +x 00:21:03.519 05:03:33 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:21:03.519 05:03:33 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:21:03.519 05:03:33 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:03.776 [2024-04-27 05:03:33.611345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:03.776 [2024-04-27 05:03:33.611748] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.777 [2024-04-27 05:03:33.611851] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:21:03.777 [2024-04-27 05:03:33.612076] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.777 [2024-04-27 05:03:33.612713] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.777 [2024-04-27 05:03:33.612903] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:03.777 [2024-04-27 05:03:33.613168] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:21:03.777 [2024-04-27 05:03:33.613306] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:03.777 pt3 00:21:03.777 05:03:33 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:03.777 05:03:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:03.777 05:03:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:03.777 05:03:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:03.777 05:03:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:03.777 05:03:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:03.777 05:03:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:03.777 05:03:33 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:21:03.777 05:03:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:03.777 05:03:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:03.777 05:03:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.777 05:03:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.033 05:03:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:04.033 "name": "raid_bdev1", 00:21:04.033 "uuid": "5a73ad88-9640-473a-b659-f2c54727f9fc", 00:21:04.033 "strip_size_kb": 0, 00:21:04.033 "state": "configuring", 00:21:04.033 "raid_level": "raid1", 00:21:04.033 "superblock": true, 00:21:04.033 "num_base_bdevs": 4, 00:21:04.033 "num_base_bdevs_discovered": 2, 00:21:04.033 "num_base_bdevs_operational": 3, 00:21:04.033 "base_bdevs_list": [ 00:21:04.033 { 00:21:04.033 "name": null, 00:21:04.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.033 "is_configured": false, 00:21:04.033 "data_offset": 2048, 00:21:04.033 "data_size": 63488 00:21:04.033 }, 00:21:04.033 { 00:21:04.033 "name": "pt2", 00:21:04.033 "uuid": "66e5182b-728e-5f2f-8fc0-103140b6ff56", 00:21:04.033 "is_configured": true, 00:21:04.033 "data_offset": 2048, 00:21:04.033 "data_size": 63488 00:21:04.033 }, 00:21:04.033 { 00:21:04.033 "name": "pt3", 00:21:04.033 "uuid": "cb1909d8-7ae6-5de7-8174-3cd0114164f8", 00:21:04.033 "is_configured": true, 00:21:04.033 "data_offset": 2048, 00:21:04.033 "data_size": 63488 00:21:04.033 }, 00:21:04.033 { 00:21:04.033 "name": null, 00:21:04.033 "uuid": "1495c1fe-9c07-5cdc-be1d-657c0c0a255b", 00:21:04.033 "is_configured": false, 00:21:04.033 "data_offset": 2048, 00:21:04.033 "data_size": 63488 00:21:04.033 } 00:21:04.033 ] 00:21:04.033 }' 00:21:04.033 05:03:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:04.034 05:03:33 -- common/autotest_common.sh@10 -- # set +x 00:21:04.966 05:03:34 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:21:04.966 05:03:34 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:21:04.966 05:03:34 -- bdev/bdev_raid.sh@462 -- # i=3 00:21:04.966 05:03:34 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:04.966 [2024-04-27 05:03:34.839655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:04.966 [2024-04-27 05:03:34.840079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.966 [2024-04-27 05:03:34.840179] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:21:04.966 [2024-04-27 05:03:34.840422] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.966 [2024-04-27 05:03:34.841067] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.966 [2024-04-27 05:03:34.841237] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:04.966 [2024-04-27 05:03:34.841469] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:21:04.966 [2024-04-27 05:03:34.841609] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:04.966 [2024-04-27 05:03:34.841896] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ba80 00:21:04.966 [2024-04-27 05:03:34.842026] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
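Every verify_raid_bdev_state call in this run reduces to the same RPC-plus-jq pattern visible above. A minimal sketch of that pattern, assuming the rpc.py path and /var/tmp/spdk-raid.sock socket this test environment uses (the shell variable names rpc, sock and info are only for the sketch):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Dump all raid bdevs and keep only raid_bdev1, as bdev_raid.sh@127 does.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # Fields the verify helper asserts on.
    echo "$info" | jq -r '.state, .raid_level, .num_base_bdevs_discovered, .num_base_bdevs_operational'
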
00:21:04.966 [2024-04-27 05:03:34.842206] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:21:04.966 [2024-04-27 05:03:34.842739] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ba80 00:21:04.966 [2024-04-27 05:03:34.842894] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ba80 00:21:04.966 [2024-04-27 05:03:34.843131] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.966 pt4 00:21:04.966 05:03:34 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:04.966 05:03:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:04.966 05:03:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:04.966 05:03:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:04.966 05:03:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:04.966 05:03:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:04.966 05:03:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:04.966 05:03:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:04.966 05:03:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:04.966 05:03:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:04.966 05:03:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.966 05:03:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.531 05:03:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:05.531 "name": "raid_bdev1", 00:21:05.531 "uuid": "5a73ad88-9640-473a-b659-f2c54727f9fc", 00:21:05.531 "strip_size_kb": 0, 00:21:05.531 "state": "online", 00:21:05.531 "raid_level": "raid1", 00:21:05.531 "superblock": true, 00:21:05.531 "num_base_bdevs": 4, 00:21:05.531 "num_base_bdevs_discovered": 3, 00:21:05.531 "num_base_bdevs_operational": 3, 00:21:05.531 "base_bdevs_list": [ 00:21:05.531 { 00:21:05.531 "name": null, 00:21:05.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.531 "is_configured": false, 00:21:05.531 "data_offset": 2048, 00:21:05.531 "data_size": 63488 00:21:05.531 }, 00:21:05.531 { 00:21:05.531 "name": "pt2", 00:21:05.531 "uuid": "66e5182b-728e-5f2f-8fc0-103140b6ff56", 00:21:05.531 "is_configured": true, 00:21:05.531 "data_offset": 2048, 00:21:05.531 "data_size": 63488 00:21:05.531 }, 00:21:05.531 { 00:21:05.531 "name": "pt3", 00:21:05.531 "uuid": "cb1909d8-7ae6-5de7-8174-3cd0114164f8", 00:21:05.531 "is_configured": true, 00:21:05.531 "data_offset": 2048, 00:21:05.531 "data_size": 63488 00:21:05.531 }, 00:21:05.531 { 00:21:05.531 "name": "pt4", 00:21:05.531 "uuid": "1495c1fe-9c07-5cdc-be1d-657c0c0a255b", 00:21:05.531 "is_configured": true, 00:21:05.531 "data_offset": 2048, 00:21:05.531 "data_size": 63488 00:21:05.531 } 00:21:05.531 ] 00:21:05.531 }' 00:21:05.531 05:03:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:05.531 05:03:35 -- common/autotest_common.sh@10 -- # set +x 00:21:06.096 05:03:35 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:21:06.096 05:03:35 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:06.354 [2024-04-27 05:03:36.027881] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:06.354 [2024-04-27 05:03:36.028215] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:21:06.354 [2024-04-27 05:03:36.028438] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:06.354 [2024-04-27 05:03:36.028690] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:06.354 [2024-04-27 05:03:36.028834] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state offline 00:21:06.354 05:03:36 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.354 05:03:36 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:21:06.612 05:03:36 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:21:06.612 05:03:36 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:21:06.612 05:03:36 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:06.871 [2024-04-27 05:03:36.576045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:06.871 [2024-04-27 05:03:36.576462] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.871 [2024-04-27 05:03:36.576679] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:21:06.871 [2024-04-27 05:03:36.576828] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.871 [2024-04-27 05:03:36.579701] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.871 [2024-04-27 05:03:36.579909] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:06.871 [2024-04-27 05:03:36.580158] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:21:06.871 [2024-04-27 05:03:36.580315] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:06.871 pt1 00:21:06.871 05:03:36 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:21:06.871 05:03:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:06.871 05:03:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:06.871 05:03:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:06.871 05:03:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:06.871 05:03:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:06.871 05:03:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:06.871 05:03:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:06.871 05:03:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:06.871 05:03:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:06.871 05:03:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.871 05:03:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.130 05:03:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:07.130 "name": "raid_bdev1", 00:21:07.130 "uuid": "5a73ad88-9640-473a-b659-f2c54727f9fc", 00:21:07.130 "strip_size_kb": 0, 00:21:07.130 "state": "configuring", 00:21:07.130 "raid_level": "raid1", 00:21:07.130 "superblock": true, 00:21:07.130 "num_base_bdevs": 4, 00:21:07.130 "num_base_bdevs_discovered": 1, 00:21:07.130 "num_base_bdevs_operational": 4, 00:21:07.130 "base_bdevs_list": [ 00:21:07.130 { 00:21:07.130 "name": "pt1", 00:21:07.130 "uuid": 
"9e4bf3b5-e98c-57d8-a024-43dd3c911df2", 00:21:07.130 "is_configured": true, 00:21:07.130 "data_offset": 2048, 00:21:07.130 "data_size": 63488 00:21:07.130 }, 00:21:07.130 { 00:21:07.130 "name": null, 00:21:07.130 "uuid": "66e5182b-728e-5f2f-8fc0-103140b6ff56", 00:21:07.130 "is_configured": false, 00:21:07.130 "data_offset": 2048, 00:21:07.130 "data_size": 63488 00:21:07.130 }, 00:21:07.130 { 00:21:07.130 "name": null, 00:21:07.130 "uuid": "cb1909d8-7ae6-5de7-8174-3cd0114164f8", 00:21:07.130 "is_configured": false, 00:21:07.130 "data_offset": 2048, 00:21:07.130 "data_size": 63488 00:21:07.130 }, 00:21:07.130 { 00:21:07.130 "name": null, 00:21:07.130 "uuid": "1495c1fe-9c07-5cdc-be1d-657c0c0a255b", 00:21:07.130 "is_configured": false, 00:21:07.130 "data_offset": 2048, 00:21:07.130 "data_size": 63488 00:21:07.130 } 00:21:07.130 ] 00:21:07.130 }' 00:21:07.130 05:03:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:07.130 05:03:36 -- common/autotest_common.sh@10 -- # set +x 00:21:07.693 05:03:37 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:21:07.693 05:03:37 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:21:07.693 05:03:37 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:07.951 05:03:37 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:21:07.951 05:03:37 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:21:07.951 05:03:37 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:08.208 05:03:38 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:21:08.208 05:03:38 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:21:08.208 05:03:38 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:08.466 05:03:38 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:21:08.466 05:03:38 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:21:08.466 05:03:38 -- bdev/bdev_raid.sh@489 -- # i=3 00:21:08.466 05:03:38 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:08.724 [2024-04-27 05:03:38.536786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:08.724 [2024-04-27 05:03:38.537182] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.724 [2024-04-27 05:03:38.537274] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:21:08.724 [2024-04-27 05:03:38.537509] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.724 [2024-04-27 05:03:38.538110] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.724 [2024-04-27 05:03:38.538298] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:08.724 [2024-04-27 05:03:38.538562] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:21:08.724 [2024-04-27 05:03:38.538685] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:08.724 [2024-04-27 05:03:38.538795] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:08.724 [2024-04-27 05:03:38.538871] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c980 name raid_bdev1, state configuring 
00:21:08.724 [2024-04-27 05:03:38.539056] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:08.724 pt4 00:21:08.724 05:03:38 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:08.724 05:03:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:08.724 05:03:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:08.724 05:03:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:08.724 05:03:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:08.724 05:03:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:08.724 05:03:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:08.724 05:03:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:08.724 05:03:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:08.725 05:03:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:08.725 05:03:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.725 05:03:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.983 05:03:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:08.983 "name": "raid_bdev1", 00:21:08.983 "uuid": "5a73ad88-9640-473a-b659-f2c54727f9fc", 00:21:08.983 "strip_size_kb": 0, 00:21:08.983 "state": "configuring", 00:21:08.983 "raid_level": "raid1", 00:21:08.983 "superblock": true, 00:21:08.983 "num_base_bdevs": 4, 00:21:08.983 "num_base_bdevs_discovered": 1, 00:21:08.983 "num_base_bdevs_operational": 3, 00:21:08.983 "base_bdevs_list": [ 00:21:08.983 { 00:21:08.983 "name": null, 00:21:08.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.983 "is_configured": false, 00:21:08.983 "data_offset": 2048, 00:21:08.983 "data_size": 63488 00:21:08.983 }, 00:21:08.983 { 00:21:08.983 "name": null, 00:21:08.983 "uuid": "66e5182b-728e-5f2f-8fc0-103140b6ff56", 00:21:08.983 "is_configured": false, 00:21:08.983 "data_offset": 2048, 00:21:08.983 "data_size": 63488 00:21:08.983 }, 00:21:08.983 { 00:21:08.983 "name": null, 00:21:08.983 "uuid": "cb1909d8-7ae6-5de7-8174-3cd0114164f8", 00:21:08.983 "is_configured": false, 00:21:08.983 "data_offset": 2048, 00:21:08.983 "data_size": 63488 00:21:08.983 }, 00:21:08.983 { 00:21:08.983 "name": "pt4", 00:21:08.983 "uuid": "1495c1fe-9c07-5cdc-be1d-657c0c0a255b", 00:21:08.983 "is_configured": true, 00:21:08.983 "data_offset": 2048, 00:21:08.983 "data_size": 63488 00:21:08.983 } 00:21:08.983 ] 00:21:08.983 }' 00:21:08.983 05:03:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:08.983 05:03:38 -- common/autotest_common.sh@10 -- # set +x 00:21:09.919 05:03:39 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:21:09.919 05:03:39 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:21:09.919 05:03:39 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:09.919 [2024-04-27 05:03:39.729083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:09.919 [2024-04-27 05:03:39.729546] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.919 [2024-04-27 05:03:39.729737] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d280 00:21:09.919 [2024-04-27 05:03:39.729880] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.919 [2024-04-27 
05:03:39.730580] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.919 [2024-04-27 05:03:39.730781] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:09.919 [2024-04-27 05:03:39.731038] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:09.919 [2024-04-27 05:03:39.731189] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:09.919 pt2 00:21:09.919 05:03:39 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:21:09.919 05:03:39 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:21:09.919 05:03:39 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:10.177 [2024-04-27 05:03:39.993173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:10.177 [2024-04-27 05:03:39.993460] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:10.177 [2024-04-27 05:03:39.993635] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:21:10.177 [2024-04-27 05:03:39.993778] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:10.177 [2024-04-27 05:03:39.994452] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:10.177 [2024-04-27 05:03:39.994649] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:10.177 [2024-04-27 05:03:39.994915] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:21:10.177 [2024-04-27 05:03:39.995072] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:10.177 [2024-04-27 05:03:39.995301] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000cf80 00:21:10.177 [2024-04-27 05:03:39.995420] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:10.177 [2024-04-27 05:03:39.995574] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:21:10.177 [2024-04-27 05:03:39.996096] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000cf80 00:21:10.177 [2024-04-27 05:03:39.996225] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000cf80 00:21:10.177 [2024-04-27 05:03:39.996461] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.177 pt3 00:21:10.177 05:03:40 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:21:10.177 05:03:40 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:21:10.177 05:03:40 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:10.177 05:03:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:10.177 05:03:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:10.177 05:03:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:10.177 05:03:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:10.177 05:03:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:10.177 05:03:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:10.177 05:03:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:10.177 05:03:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:10.177 05:03:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:10.177 05:03:40 
-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.177 05:03:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.434 05:03:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:10.434 "name": "raid_bdev1", 00:21:10.434 "uuid": "5a73ad88-9640-473a-b659-f2c54727f9fc", 00:21:10.434 "strip_size_kb": 0, 00:21:10.434 "state": "online", 00:21:10.434 "raid_level": "raid1", 00:21:10.434 "superblock": true, 00:21:10.434 "num_base_bdevs": 4, 00:21:10.434 "num_base_bdevs_discovered": 3, 00:21:10.434 "num_base_bdevs_operational": 3, 00:21:10.434 "base_bdevs_list": [ 00:21:10.434 { 00:21:10.434 "name": null, 00:21:10.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.434 "is_configured": false, 00:21:10.434 "data_offset": 2048, 00:21:10.434 "data_size": 63488 00:21:10.434 }, 00:21:10.434 { 00:21:10.434 "name": "pt2", 00:21:10.434 "uuid": "66e5182b-728e-5f2f-8fc0-103140b6ff56", 00:21:10.434 "is_configured": true, 00:21:10.434 "data_offset": 2048, 00:21:10.434 "data_size": 63488 00:21:10.434 }, 00:21:10.434 { 00:21:10.434 "name": "pt3", 00:21:10.434 "uuid": "cb1909d8-7ae6-5de7-8174-3cd0114164f8", 00:21:10.434 "is_configured": true, 00:21:10.434 "data_offset": 2048, 00:21:10.434 "data_size": 63488 00:21:10.434 }, 00:21:10.434 { 00:21:10.434 "name": "pt4", 00:21:10.434 "uuid": "1495c1fe-9c07-5cdc-be1d-657c0c0a255b", 00:21:10.434 "is_configured": true, 00:21:10.434 "data_offset": 2048, 00:21:10.434 "data_size": 63488 00:21:10.434 } 00:21:10.434 ] 00:21:10.434 }' 00:21:10.434 05:03:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:10.434 05:03:40 -- common/autotest_common.sh@10 -- # set +x 00:21:11.365 05:03:40 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:11.365 05:03:40 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:21:11.365 [2024-04-27 05:03:41.165734] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:11.365 05:03:41 -- bdev/bdev_raid.sh@506 -- # '[' 5a73ad88-9640-473a-b659-f2c54727f9fc '!=' 5a73ad88-9640-473a-b659-f2c54727f9fc ']' 00:21:11.365 05:03:41 -- bdev/bdev_raid.sh@511 -- # killprocess 134040 00:21:11.365 05:03:41 -- common/autotest_common.sh@926 -- # '[' -z 134040 ']' 00:21:11.365 05:03:41 -- common/autotest_common.sh@930 -- # kill -0 134040 00:21:11.365 05:03:41 -- common/autotest_common.sh@931 -- # uname 00:21:11.365 05:03:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:11.365 05:03:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134040 00:21:11.365 05:03:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:11.365 05:03:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:11.365 05:03:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134040' 00:21:11.365 killing process with pid 134040 00:21:11.365 05:03:41 -- common/autotest_common.sh@945 -- # kill 134040 00:21:11.365 05:03:41 -- common/autotest_common.sh@950 -- # wait 134040 00:21:11.365 [2024-04-27 05:03:41.215973] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:11.365 [2024-04-27 05:03:41.216097] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:11.365 [2024-04-27 05:03:41.216199] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:11.365 [2024-04-27 
05:03:41.216370] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state offline 00:21:11.623 [2024-04-27 05:03:41.324983] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:11.882 05:03:41 -- bdev/bdev_raid.sh@513 -- # return 0 00:21:11.882 00:21:11.882 real 0m23.500s 00:21:11.882 user 0m43.694s 00:21:11.882 sys 0m3.102s 00:21:11.882 05:03:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:11.882 05:03:41 -- common/autotest_common.sh@10 -- # set +x 00:21:11.882 ************************************ 00:21:11.882 END TEST raid_superblock_test 00:21:11.882 ************************************ 00:21:11.882 05:03:41 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:21:11.882 05:03:41 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:21:11.882 05:03:41 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:21:11.882 05:03:41 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:11.882 05:03:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:11.882 05:03:41 -- common/autotest_common.sh@10 -- # set +x 00:21:12.140 ************************************ 00:21:12.140 START TEST raid_rebuild_test 00:21:12.140 ************************************ 00:21:12.140 05:03:41 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false false 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@544 -- # raid_pid=134732 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134732 /var/tmp/spdk-raid.sock 00:21:12.140 05:03:41 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:12.140 05:03:41 -- common/autotest_common.sh@819 -- # '[' -z 134732 ']' 00:21:12.140 05:03:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:12.140 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:12.140 05:03:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:12.140 05:03:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:12.140 05:03:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:12.140 05:03:41 -- common/autotest_common.sh@10 -- # set +x 00:21:12.140 [2024-04-27 05:03:41.872695] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:12.140 [2024-04-27 05:03:41.873211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134732 ] 00:21:12.140 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:12.140 Zero copy mechanism will not be used. 00:21:12.398 [2024-04-27 05:03:42.046220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.398 [2024-04-27 05:03:42.183844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.398 [2024-04-27 05:03:42.301111] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:12.964 05:03:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:12.964 05:03:42 -- common/autotest_common.sh@852 -- # return 0 00:21:12.964 05:03:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:12.964 05:03:42 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:12.964 05:03:42 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:13.221 BaseBdev1 00:21:13.221 05:03:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:13.221 05:03:43 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:13.221 05:03:43 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:13.480 BaseBdev2 00:21:13.480 05:03:43 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:13.739 spare_malloc 00:21:13.739 05:03:43 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:13.996 spare_delay 00:21:13.996 05:03:43 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:14.254 [2024-04-27 05:03:44.080918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:14.254 [2024-04-27 05:03:44.081945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.254 [2024-04-27 05:03:44.082282] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:21:14.254 [2024-04-27 05:03:44.082603] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.254 [2024-04-27 05:03:44.085994] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.254 [2024-04-27 05:03:44.086301] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:14.254 spare 00:21:14.254 05:03:44 -- bdev/bdev_raid.sh@563 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:14.512 [2024-04-27 05:03:44.346999] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:14.512 [2024-04-27 05:03:44.349742] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:14.512 [2024-04-27 05:03:44.349984] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:21:14.512 [2024-04-27 05:03:44.350039] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:14.512 [2024-04-27 05:03:44.350368] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:21:14.512 [2024-04-27 05:03:44.350982] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:21:14.512 [2024-04-27 05:03:44.351114] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:21:14.512 [2024-04-27 05:03:44.351515] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.512 05:03:44 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:14.512 05:03:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:14.512 05:03:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:14.512 05:03:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:14.512 05:03:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:14.512 05:03:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:14.512 05:03:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:14.512 05:03:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:14.512 05:03:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:14.512 05:03:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:14.512 05:03:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.512 05:03:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.769 05:03:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:14.769 "name": "raid_bdev1", 00:21:14.769 "uuid": "54357727-94f5-4341-b66f-c07b8e3b53d1", 00:21:14.769 "strip_size_kb": 0, 00:21:14.769 "state": "online", 00:21:14.769 "raid_level": "raid1", 00:21:14.769 "superblock": false, 00:21:14.769 "num_base_bdevs": 2, 00:21:14.769 "num_base_bdevs_discovered": 2, 00:21:14.769 "num_base_bdevs_operational": 2, 00:21:14.769 "base_bdevs_list": [ 00:21:14.769 { 00:21:14.769 "name": "BaseBdev1", 00:21:14.769 "uuid": "3b1ff89d-1a34-49c3-adcd-6163c19a5dcb", 00:21:14.769 "is_configured": true, 00:21:14.769 "data_offset": 0, 00:21:14.769 "data_size": 65536 00:21:14.769 }, 00:21:14.769 { 00:21:14.769 "name": "BaseBdev2", 00:21:14.769 "uuid": "607eb3a7-5889-42ec-a9c0-45880c3f5972", 00:21:14.769 "is_configured": true, 00:21:14.769 "data_offset": 0, 00:21:14.769 "data_size": 65536 00:21:14.769 } 00:21:14.769 ] 00:21:14.769 }' 00:21:14.769 05:03:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:14.769 05:03:44 -- common/autotest_common.sh@10 -- # set +x 00:21:15.702 05:03:45 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:15.702 05:03:45 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:15.702 [2024-04-27 05:03:45.532093] bdev_raid.c: 
993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:15.702 05:03:45 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:15.702 05:03:45 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.702 05:03:45 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:15.960 05:03:45 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:15.960 05:03:45 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:15.960 05:03:45 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:15.960 05:03:45 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:15.960 05:03:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:15.960 05:03:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:15.960 05:03:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:15.960 05:03:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:15.960 05:03:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:15.960 05:03:45 -- bdev/nbd_common.sh@12 -- # local i 00:21:15.960 05:03:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:15.960 05:03:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:15.960 05:03:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:16.218 [2024-04-27 05:03:46.112041] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:21:16.476 /dev/nbd0 00:21:16.476 05:03:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:16.476 05:03:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:16.476 05:03:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:16.476 05:03:46 -- common/autotest_common.sh@857 -- # local i 00:21:16.476 05:03:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:16.476 05:03:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:16.476 05:03:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:16.476 05:03:46 -- common/autotest_common.sh@861 -- # break 00:21:16.476 05:03:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:16.476 05:03:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:16.476 05:03:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:16.476 1+0 records in 00:21:16.476 1+0 records out 00:21:16.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647808 s, 6.3 MB/s 00:21:16.476 05:03:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:16.476 05:03:46 -- common/autotest_common.sh@874 -- # size=4096 00:21:16.476 05:03:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:16.476 05:03:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:16.476 05:03:46 -- common/autotest_common.sh@877 -- # return 0 00:21:16.476 05:03:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:16.476 05:03:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:16.476 05:03:46 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:16.476 05:03:46 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:16.476 05:03:46 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:21:21.738 65536+0 records in 00:21:21.738 65536+0 records out 00:21:21.738 33554432 bytes (34 MB, 32 MiB) 
copied, 5.18999 s, 6.5 MB/s 00:21:21.738 05:03:51 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:21.738 05:03:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:21.738 05:03:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:21.738 05:03:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:21.738 05:03:51 -- bdev/nbd_common.sh@51 -- # local i 00:21:21.738 05:03:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:21.738 05:03:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:21.995 [2024-04-27 05:03:51.649986] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.995 05:03:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:21.995 05:03:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:21.995 05:03:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:21.995 05:03:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:21.995 05:03:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:21.995 05:03:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:21.995 05:03:51 -- bdev/nbd_common.sh@41 -- # break 00:21:21.995 05:03:51 -- bdev/nbd_common.sh@45 -- # return 0 00:21:21.995 05:03:51 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:21.995 [2024-04-27 05:03:51.897050] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:22.253 05:03:51 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:22.253 05:03:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:22.253 05:03:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:22.253 05:03:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:22.253 05:03:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:22.253 05:03:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:22.253 05:03:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:22.253 05:03:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:22.253 05:03:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:22.253 05:03:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:22.253 05:03:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.253 05:03:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.512 05:03:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:22.512 "name": "raid_bdev1", 00:21:22.512 "uuid": "54357727-94f5-4341-b66f-c07b8e3b53d1", 00:21:22.512 "strip_size_kb": 0, 00:21:22.512 "state": "online", 00:21:22.512 "raid_level": "raid1", 00:21:22.512 "superblock": false, 00:21:22.512 "num_base_bdevs": 2, 00:21:22.512 "num_base_bdevs_discovered": 1, 00:21:22.512 "num_base_bdevs_operational": 1, 00:21:22.512 "base_bdevs_list": [ 00:21:22.512 { 00:21:22.512 "name": null, 00:21:22.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.512 "is_configured": false, 00:21:22.512 "data_offset": 0, 00:21:22.512 "data_size": 65536 00:21:22.512 }, 00:21:22.512 { 00:21:22.512 "name": "BaseBdev2", 00:21:22.512 "uuid": "607eb3a7-5889-42ec-a9c0-45880c3f5972", 00:21:22.512 "is_configured": true, 00:21:22.512 "data_offset": 0, 00:21:22.512 "data_size": 65536 00:21:22.512 } 00:21:22.512 ] 00:21:22.512 }' 
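The raid_rebuild_test setup logged above boils down to a short RPC sequence: two malloc base bdevs, a delay-plus-passthru stack for the future spare, a raid1 bdev, an NBD export that receives 32 MiB of random data, and finally removal of BaseBdev1 to leave the array with a single operational base bdev. A sketch of that sequence, under the same script path and RPC socket this run uses:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev2
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b spare_malloc
    "$rpc" -s "$sock" bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    "$rpc" -s "$sock" bdev_passthru_create -b spare_delay -p spare
    "$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
    "$rpc" -s "$sock" nbd_start_disk raid_bdev1 /dev/nbd0              # expose the array as a block device
    dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct    # write 32 MiB to rebuild later
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev1             # degrade raid_bdev1 to one base bdev
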
00:21:22.512 05:03:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:22.512 05:03:52 -- common/autotest_common.sh@10 -- # set +x 00:21:23.078 05:03:52 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:23.337 [2024-04-27 05:03:53.057455] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:23.337 [2024-04-27 05:03:53.057860] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:23.337 [2024-04-27 05:03:53.066092] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09550 00:21:23.337 [2024-04-27 05:03:53.069080] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:23.337 05:03:53 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:24.271 05:03:54 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:24.271 05:03:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:24.271 05:03:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:24.271 05:03:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:24.271 05:03:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:24.271 05:03:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.271 05:03:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.529 05:03:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:24.529 "name": "raid_bdev1", 00:21:24.529 "uuid": "54357727-94f5-4341-b66f-c07b8e3b53d1", 00:21:24.529 "strip_size_kb": 0, 00:21:24.529 "state": "online", 00:21:24.529 "raid_level": "raid1", 00:21:24.529 "superblock": false, 00:21:24.529 "num_base_bdevs": 2, 00:21:24.529 "num_base_bdevs_discovered": 2, 00:21:24.529 "num_base_bdevs_operational": 2, 00:21:24.529 "process": { 00:21:24.529 "type": "rebuild", 00:21:24.529 "target": "spare", 00:21:24.529 "progress": { 00:21:24.529 "blocks": 24576, 00:21:24.529 "percent": 37 00:21:24.529 } 00:21:24.529 }, 00:21:24.529 "base_bdevs_list": [ 00:21:24.529 { 00:21:24.529 "name": "spare", 00:21:24.529 "uuid": "a36d3d39-6637-595d-83ce-6871115a4426", 00:21:24.529 "is_configured": true, 00:21:24.529 "data_offset": 0, 00:21:24.529 "data_size": 65536 00:21:24.529 }, 00:21:24.529 { 00:21:24.529 "name": "BaseBdev2", 00:21:24.529 "uuid": "607eb3a7-5889-42ec-a9c0-45880c3f5972", 00:21:24.529 "is_configured": true, 00:21:24.529 "data_offset": 0, 00:21:24.529 "data_size": 65536 00:21:24.529 } 00:21:24.529 ] 00:21:24.529 }' 00:21:24.530 05:03:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:24.530 05:03:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:24.530 05:03:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:24.788 05:03:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:24.788 05:03:54 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:25.046 [2024-04-27 05:03:54.725339] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:25.046 [2024-04-27 05:03:54.787967] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:25.046 [2024-04-27 05:03:54.789033] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.046 05:03:54 -- 
bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:25.046 05:03:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:25.046 05:03:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:25.046 05:03:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:25.046 05:03:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:25.046 05:03:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:25.046 05:03:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:25.046 05:03:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:25.046 05:03:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:25.046 05:03:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:25.046 05:03:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.046 05:03:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.304 05:03:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:25.304 "name": "raid_bdev1", 00:21:25.304 "uuid": "54357727-94f5-4341-b66f-c07b8e3b53d1", 00:21:25.304 "strip_size_kb": 0, 00:21:25.304 "state": "online", 00:21:25.304 "raid_level": "raid1", 00:21:25.304 "superblock": false, 00:21:25.304 "num_base_bdevs": 2, 00:21:25.304 "num_base_bdevs_discovered": 1, 00:21:25.304 "num_base_bdevs_operational": 1, 00:21:25.304 "base_bdevs_list": [ 00:21:25.304 { 00:21:25.304 "name": null, 00:21:25.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.304 "is_configured": false, 00:21:25.304 "data_offset": 0, 00:21:25.304 "data_size": 65536 00:21:25.304 }, 00:21:25.304 { 00:21:25.304 "name": "BaseBdev2", 00:21:25.304 "uuid": "607eb3a7-5889-42ec-a9c0-45880c3f5972", 00:21:25.304 "is_configured": true, 00:21:25.304 "data_offset": 0, 00:21:25.304 "data_size": 65536 00:21:25.304 } 00:21:25.304 ] 00:21:25.304 }' 00:21:25.304 05:03:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:25.304 05:03:55 -- common/autotest_common.sh@10 -- # set +x 00:21:25.936 05:03:55 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:25.936 05:03:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:25.936 05:03:55 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:25.936 05:03:55 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:25.936 05:03:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:25.936 05:03:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.936 05:03:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.195 05:03:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:26.195 "name": "raid_bdev1", 00:21:26.195 "uuid": "54357727-94f5-4341-b66f-c07b8e3b53d1", 00:21:26.195 "strip_size_kb": 0, 00:21:26.195 "state": "online", 00:21:26.195 "raid_level": "raid1", 00:21:26.195 "superblock": false, 00:21:26.195 "num_base_bdevs": 2, 00:21:26.195 "num_base_bdevs_discovered": 1, 00:21:26.195 "num_base_bdevs_operational": 1, 00:21:26.195 "base_bdevs_list": [ 00:21:26.195 { 00:21:26.195 "name": null, 00:21:26.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.195 "is_configured": false, 00:21:26.195 "data_offset": 0, 00:21:26.195 "data_size": 65536 00:21:26.195 }, 00:21:26.195 { 00:21:26.195 "name": "BaseBdev2", 00:21:26.195 "uuid": "607eb3a7-5889-42ec-a9c0-45880c3f5972", 00:21:26.195 "is_configured": true, 
00:21:26.195 "data_offset": 0, 00:21:26.195 "data_size": 65536 00:21:26.195 } 00:21:26.195 ] 00:21:26.195 }' 00:21:26.195 05:03:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:26.195 05:03:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:26.195 05:03:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:26.195 05:03:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:26.195 05:03:56 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:26.453 [2024-04-27 05:03:56.342383] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:26.453 [2024-04-27 05:03:56.342773] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:26.453 [2024-04-27 05:03:56.350585] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0 00:21:26.453 [2024-04-27 05:03:56.353433] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:26.711 05:03:56 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:27.643 05:03:57 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:27.643 05:03:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:27.643 05:03:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:27.643 05:03:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:27.643 05:03:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:27.643 05:03:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.643 05:03:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.902 05:03:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:27.902 "name": "raid_bdev1", 00:21:27.902 "uuid": "54357727-94f5-4341-b66f-c07b8e3b53d1", 00:21:27.902 "strip_size_kb": 0, 00:21:27.902 "state": "online", 00:21:27.902 "raid_level": "raid1", 00:21:27.902 "superblock": false, 00:21:27.902 "num_base_bdevs": 2, 00:21:27.902 "num_base_bdevs_discovered": 2, 00:21:27.902 "num_base_bdevs_operational": 2, 00:21:27.902 "process": { 00:21:27.902 "type": "rebuild", 00:21:27.902 "target": "spare", 00:21:27.902 "progress": { 00:21:27.902 "blocks": 24576, 00:21:27.902 "percent": 37 00:21:27.902 } 00:21:27.902 }, 00:21:27.902 "base_bdevs_list": [ 00:21:27.902 { 00:21:27.902 "name": "spare", 00:21:27.902 "uuid": "a36d3d39-6637-595d-83ce-6871115a4426", 00:21:27.902 "is_configured": true, 00:21:27.902 "data_offset": 0, 00:21:27.902 "data_size": 65536 00:21:27.902 }, 00:21:27.902 { 00:21:27.902 "name": "BaseBdev2", 00:21:27.902 "uuid": "607eb3a7-5889-42ec-a9c0-45880c3f5972", 00:21:27.902 "is_configured": true, 00:21:27.902 "data_offset": 0, 00:21:27.902 "data_size": 65536 00:21:27.902 } 00:21:27.902 ] 00:21:27.902 }' 00:21:27.902 05:03:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:27.902 05:03:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:27.902 05:03:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:27.902 05:03:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:27.902 05:03:57 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:27.902 05:03:57 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:27.902 05:03:57 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:27.902 05:03:57 -- 
bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:27.902 05:03:57 -- bdev/bdev_raid.sh@657 -- # local timeout=403 00:21:27.902 05:03:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:27.902 05:03:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:27.902 05:03:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:27.902 05:03:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:27.902 05:03:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:27.902 05:03:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:27.902 05:03:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.902 05:03:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.161 05:03:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:28.161 "name": "raid_bdev1", 00:21:28.161 "uuid": "54357727-94f5-4341-b66f-c07b8e3b53d1", 00:21:28.161 "strip_size_kb": 0, 00:21:28.161 "state": "online", 00:21:28.161 "raid_level": "raid1", 00:21:28.161 "superblock": false, 00:21:28.161 "num_base_bdevs": 2, 00:21:28.161 "num_base_bdevs_discovered": 2, 00:21:28.161 "num_base_bdevs_operational": 2, 00:21:28.161 "process": { 00:21:28.161 "type": "rebuild", 00:21:28.161 "target": "spare", 00:21:28.161 "progress": { 00:21:28.161 "blocks": 32768, 00:21:28.161 "percent": 50 00:21:28.161 } 00:21:28.161 }, 00:21:28.161 "base_bdevs_list": [ 00:21:28.161 { 00:21:28.161 "name": "spare", 00:21:28.161 "uuid": "a36d3d39-6637-595d-83ce-6871115a4426", 00:21:28.161 "is_configured": true, 00:21:28.161 "data_offset": 0, 00:21:28.161 "data_size": 65536 00:21:28.161 }, 00:21:28.161 { 00:21:28.161 "name": "BaseBdev2", 00:21:28.161 "uuid": "607eb3a7-5889-42ec-a9c0-45880c3f5972", 00:21:28.161 "is_configured": true, 00:21:28.161 "data_offset": 0, 00:21:28.161 "data_size": 65536 00:21:28.161 } 00:21:28.161 ] 00:21:28.161 }' 00:21:28.161 05:03:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:28.419 05:03:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:28.419 05:03:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:28.419 05:03:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:28.419 05:03:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:29.354 05:03:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:29.354 05:03:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:29.354 05:03:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:29.354 05:03:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:29.354 05:03:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:29.354 05:03:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:29.354 05:03:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.354 05:03:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.614 05:03:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:29.614 "name": "raid_bdev1", 00:21:29.614 "uuid": "54357727-94f5-4341-b66f-c07b8e3b53d1", 00:21:29.614 "strip_size_kb": 0, 00:21:29.614 "state": "online", 00:21:29.614 "raid_level": "raid1", 00:21:29.614 "superblock": false, 00:21:29.614 "num_base_bdevs": 2, 00:21:29.614 "num_base_bdevs_discovered": 2, 00:21:29.614 "num_base_bdevs_operational": 2, 00:21:29.614 "process": { 
00:21:29.614 "type": "rebuild", 00:21:29.614 "target": "spare", 00:21:29.614 "progress": { 00:21:29.614 "blocks": 61440, 00:21:29.614 "percent": 93 00:21:29.614 } 00:21:29.614 }, 00:21:29.614 "base_bdevs_list": [ 00:21:29.614 { 00:21:29.614 "name": "spare", 00:21:29.614 "uuid": "a36d3d39-6637-595d-83ce-6871115a4426", 00:21:29.614 "is_configured": true, 00:21:29.614 "data_offset": 0, 00:21:29.614 "data_size": 65536 00:21:29.614 }, 00:21:29.614 { 00:21:29.614 "name": "BaseBdev2", 00:21:29.614 "uuid": "607eb3a7-5889-42ec-a9c0-45880c3f5972", 00:21:29.614 "is_configured": true, 00:21:29.614 "data_offset": 0, 00:21:29.614 "data_size": 65536 00:21:29.614 } 00:21:29.614 ] 00:21:29.614 }' 00:21:29.614 05:03:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:29.614 05:03:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:29.614 05:03:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:29.872 05:03:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:29.872 05:03:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:29.872 [2024-04-27 05:03:59.583995] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:29.872 [2024-04-27 05:03:59.584431] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:29.872 [2024-04-27 05:03:59.585266] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.807 05:04:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:30.807 05:04:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:30.807 05:04:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:30.807 05:04:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:30.807 05:04:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:30.807 05:04:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:30.807 05:04:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.807 05:04:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.066 05:04:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:31.066 "name": "raid_bdev1", 00:21:31.066 "uuid": "54357727-94f5-4341-b66f-c07b8e3b53d1", 00:21:31.066 "strip_size_kb": 0, 00:21:31.066 "state": "online", 00:21:31.066 "raid_level": "raid1", 00:21:31.066 "superblock": false, 00:21:31.066 "num_base_bdevs": 2, 00:21:31.066 "num_base_bdevs_discovered": 2, 00:21:31.066 "num_base_bdevs_operational": 2, 00:21:31.066 "base_bdevs_list": [ 00:21:31.066 { 00:21:31.066 "name": "spare", 00:21:31.066 "uuid": "a36d3d39-6637-595d-83ce-6871115a4426", 00:21:31.066 "is_configured": true, 00:21:31.066 "data_offset": 0, 00:21:31.066 "data_size": 65536 00:21:31.066 }, 00:21:31.066 { 00:21:31.066 "name": "BaseBdev2", 00:21:31.066 "uuid": "607eb3a7-5889-42ec-a9c0-45880c3f5972", 00:21:31.066 "is_configured": true, 00:21:31.066 "data_offset": 0, 00:21:31.066 "data_size": 65536 00:21:31.066 } 00:21:31.066 ] 00:21:31.066 }' 00:21:31.066 05:04:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:31.066 05:04:00 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:31.066 05:04:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:31.066 05:04:00 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:31.066 05:04:00 -- bdev/bdev_raid.sh@660 -- # break 00:21:31.066 05:04:00 -- 
bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:31.066 05:04:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:31.066 05:04:00 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:31.066 05:04:00 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:31.066 05:04:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:31.067 05:04:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.067 05:04:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.325 05:04:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:31.325 "name": "raid_bdev1", 00:21:31.325 "uuid": "54357727-94f5-4341-b66f-c07b8e3b53d1", 00:21:31.325 "strip_size_kb": 0, 00:21:31.325 "state": "online", 00:21:31.325 "raid_level": "raid1", 00:21:31.325 "superblock": false, 00:21:31.325 "num_base_bdevs": 2, 00:21:31.325 "num_base_bdevs_discovered": 2, 00:21:31.325 "num_base_bdevs_operational": 2, 00:21:31.325 "base_bdevs_list": [ 00:21:31.325 { 00:21:31.325 "name": "spare", 00:21:31.325 "uuid": "a36d3d39-6637-595d-83ce-6871115a4426", 00:21:31.325 "is_configured": true, 00:21:31.325 "data_offset": 0, 00:21:31.325 "data_size": 65536 00:21:31.325 }, 00:21:31.325 { 00:21:31.325 "name": "BaseBdev2", 00:21:31.325 "uuid": "607eb3a7-5889-42ec-a9c0-45880c3f5972", 00:21:31.325 "is_configured": true, 00:21:31.325 "data_offset": 0, 00:21:31.325 "data_size": 65536 00:21:31.325 } 00:21:31.325 ] 00:21:31.325 }' 00:21:31.325 05:04:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:31.584 05:04:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:31.584 05:04:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:31.584 05:04:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:31.584 05:04:01 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:31.584 05:04:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:31.584 05:04:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:31.584 05:04:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:31.584 05:04:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:31.584 05:04:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:31.584 05:04:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:31.584 05:04:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:31.584 05:04:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:31.584 05:04:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:31.584 05:04:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.584 05:04:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.842 05:04:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:31.842 "name": "raid_bdev1", 00:21:31.842 "uuid": "54357727-94f5-4341-b66f-c07b8e3b53d1", 00:21:31.842 "strip_size_kb": 0, 00:21:31.842 "state": "online", 00:21:31.842 "raid_level": "raid1", 00:21:31.842 "superblock": false, 00:21:31.842 "num_base_bdevs": 2, 00:21:31.842 "num_base_bdevs_discovered": 2, 00:21:31.842 "num_base_bdevs_operational": 2, 00:21:31.842 "base_bdevs_list": [ 00:21:31.842 { 00:21:31.842 "name": "spare", 00:21:31.842 "uuid": "a36d3d39-6637-595d-83ce-6871115a4426", 00:21:31.842 "is_configured": true, 00:21:31.842 "data_offset": 0, 
00:21:31.842 "data_size": 65536 00:21:31.842 }, 00:21:31.842 { 00:21:31.842 "name": "BaseBdev2", 00:21:31.842 "uuid": "607eb3a7-5889-42ec-a9c0-45880c3f5972", 00:21:31.843 "is_configured": true, 00:21:31.843 "data_offset": 0, 00:21:31.843 "data_size": 65536 00:21:31.843 } 00:21:31.843 ] 00:21:31.843 }' 00:21:31.843 05:04:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:31.843 05:04:01 -- common/autotest_common.sh@10 -- # set +x 00:21:32.410 05:04:02 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:32.669 [2024-04-27 05:04:02.451125] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:32.669 [2024-04-27 05:04:02.451454] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:32.669 [2024-04-27 05:04:02.451719] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:32.669 [2024-04-27 05:04:02.451962] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:32.669 [2024-04-27 05:04:02.452092] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:21:32.669 05:04:02 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:32.669 05:04:02 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.930 05:04:02 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:32.930 05:04:02 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:32.930 05:04:02 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:32.930 05:04:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:32.930 05:04:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:32.930 05:04:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:32.930 05:04:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:32.930 05:04:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:32.930 05:04:02 -- bdev/nbd_common.sh@12 -- # local i 00:21:32.930 05:04:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:32.930 05:04:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:32.930 05:04:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:33.189 /dev/nbd0 00:21:33.189 05:04:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:33.189 05:04:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:33.189 05:04:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:33.189 05:04:03 -- common/autotest_common.sh@857 -- # local i 00:21:33.189 05:04:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:33.189 05:04:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:33.189 05:04:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:33.189 05:04:03 -- common/autotest_common.sh@861 -- # break 00:21:33.189 05:04:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:33.189 05:04:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:33.189 05:04:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:33.189 1+0 records in 00:21:33.189 1+0 records out 00:21:33.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121212 s, 3.4 MB/s 00:21:33.189 05:04:03 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:33.189 05:04:03 -- common/autotest_common.sh@874 -- # size=4096 00:21:33.189 05:04:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:33.189 05:04:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:33.189 05:04:03 -- common/autotest_common.sh@877 -- # return 0 00:21:33.189 05:04:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:33.189 05:04:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:33.189 05:04:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:33.447 /dev/nbd1 00:21:33.447 05:04:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:33.447 05:04:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:33.447 05:04:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:33.447 05:04:03 -- common/autotest_common.sh@857 -- # local i 00:21:33.447 05:04:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:33.447 05:04:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:33.447 05:04:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:33.447 05:04:03 -- common/autotest_common.sh@861 -- # break 00:21:33.447 05:04:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:33.447 05:04:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:33.447 05:04:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:33.704 1+0 records in 00:21:33.704 1+0 records out 00:21:33.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010598 s, 3.9 MB/s 00:21:33.704 05:04:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:33.704 05:04:03 -- common/autotest_common.sh@874 -- # size=4096 00:21:33.704 05:04:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:33.704 05:04:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:33.704 05:04:03 -- common/autotest_common.sh@877 -- # return 0 00:21:33.704 05:04:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:33.704 05:04:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:33.704 05:04:03 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:33.704 05:04:03 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:33.704 05:04:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:33.704 05:04:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:33.704 05:04:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:33.704 05:04:03 -- bdev/nbd_common.sh@51 -- # local i 00:21:33.704 05:04:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:33.704 05:04:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:33.962 05:04:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:33.962 05:04:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:33.962 05:04:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:33.962 05:04:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:33.962 05:04:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:33.962 05:04:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:33.962 05:04:03 -- bdev/nbd_common.sh@41 -- # break 00:21:33.962 
05:04:03 -- bdev/nbd_common.sh@45 -- # return 0 00:21:33.962 05:04:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:33.962 05:04:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:34.220 05:04:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:34.220 05:04:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:34.220 05:04:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:34.221 05:04:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:34.221 05:04:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:34.221 05:04:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:34.221 05:04:04 -- bdev/nbd_common.sh@41 -- # break 00:21:34.221 05:04:04 -- bdev/nbd_common.sh@45 -- # return 0 00:21:34.221 05:04:04 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:34.221 05:04:04 -- bdev/bdev_raid.sh@709 -- # killprocess 134732 00:21:34.221 05:04:04 -- common/autotest_common.sh@926 -- # '[' -z 134732 ']' 00:21:34.221 05:04:04 -- common/autotest_common.sh@930 -- # kill -0 134732 00:21:34.221 05:04:04 -- common/autotest_common.sh@931 -- # uname 00:21:34.221 05:04:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:34.221 05:04:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134732 00:21:34.221 05:04:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:34.221 05:04:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:34.221 05:04:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134732' 00:21:34.221 killing process with pid 134732 00:21:34.221 05:04:04 -- common/autotest_common.sh@945 -- # kill 134732 00:21:34.221 Received shutdown signal, test time was about 60.000000 seconds 00:21:34.221 00:21:34.221 Latency(us) 00:21:34.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.221 =================================================================================================================== 00:21:34.221 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:34.221 05:04:04 -- common/autotest_common.sh@950 -- # wait 134732 00:21:34.221 [2024-04-27 05:04:04.075670] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:34.479 [2024-04-27 05:04:04.133029] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:34.737 05:04:04 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:34.737 00:21:34.737 real 0m22.697s 00:21:34.737 user 0m31.568s 00:21:34.737 sys 0m4.506s 00:21:34.737 05:04:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:34.737 05:04:04 -- common/autotest_common.sh@10 -- # set +x 00:21:34.737 ************************************ 00:21:34.737 END TEST raid_rebuild_test 00:21:34.737 ************************************ 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:21:34.738 05:04:04 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:34.738 05:04:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:34.738 05:04:04 -- common/autotest_common.sh@10 -- # set +x 00:21:34.738 ************************************ 00:21:34.738 START TEST raid_rebuild_test_sb 00:21:34.738 ************************************ 00:21:34.738 05:04:04 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true false 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:34.738 05:04:04 -- 
bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@544 -- # raid_pid=135287 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:34.738 05:04:04 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135287 /var/tmp/spdk-raid.sock 00:21:34.738 05:04:04 -- common/autotest_common.sh@819 -- # '[' -z 135287 ']' 00:21:34.738 05:04:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:34.738 05:04:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:34.738 05:04:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:34.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:34.738 05:04:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:34.738 05:04:04 -- common/autotest_common.sh@10 -- # set +x 00:21:34.738 [2024-04-27 05:04:04.641202] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:34.738 [2024-04-27 05:04:04.641763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135287 ] 00:21:34.738 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:34.738 Zero copy mechanism will not be used. 
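Note: every rebuild check in these tests follows the same polling pattern — query bdev_raid_get_bdevs over the test's RPC socket, pick out raid_bdev1 with jq, and read .process.type / .process.target until the background process disappears or a timeout expires. A minimal standalone sketch of that loop, assuming the rpc.py path and /var/tmp/spdk-raid.sock socket shown above and a running SPDK target:

  #!/usr/bin/env bash
  # Poll raid_bdev1 until its background process (the rebuild) is gone or 60 s pass.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  timeout=60
  while (( SECONDS < timeout )); do
      info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      ptype=$(jq -r '.process.type // "none"' <<< "$info")
      target=$(jq -r '.process.target // "none"' <<< "$info")
      echo "process=$ptype target=$target"
      [[ $ptype == none ]] && break    # no background process left: rebuild finished
      sleep 1
  done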
00:21:34.997 [2024-04-27 05:04:04.812158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.255 [2024-04-27 05:04:04.934102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.255 [2024-04-27 05:04:05.011866] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:35.822 05:04:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:35.822 05:04:05 -- common/autotest_common.sh@852 -- # return 0 00:21:35.822 05:04:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:35.822 05:04:05 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:35.822 05:04:05 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:36.081 BaseBdev1_malloc 00:21:36.081 05:04:05 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:36.341 [2024-04-27 05:04:06.125569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:36.341 [2024-04-27 05:04:06.126547] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.341 [2024-04-27 05:04:06.126899] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:21:36.341 [2024-04-27 05:04:06.127311] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.341 [2024-04-27 05:04:06.130656] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.341 [2024-04-27 05:04:06.130989] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:36.341 BaseBdev1 00:21:36.341 05:04:06 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:36.341 05:04:06 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:36.341 05:04:06 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:36.600 BaseBdev2_malloc 00:21:36.600 05:04:06 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:36.907 [2024-04-27 05:04:06.646671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:36.907 [2024-04-27 05:04:06.648341] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.907 [2024-04-27 05:04:06.648688] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:36.907 [2024-04-27 05:04:06.649016] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.907 [2024-04-27 05:04:06.652058] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.907 [2024-04-27 05:04:06.652342] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:36.907 BaseBdev2 00:21:36.907 05:04:06 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:37.167 spare_malloc 00:21:37.167 05:04:06 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:37.426 spare_delay 00:21:37.426 05:04:07 -- bdev/bdev_raid.sh@560 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:37.685 [2024-04-27 05:04:07.440936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:37.685 [2024-04-27 05:04:07.441823] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.685 [2024-04-27 05:04:07.442143] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:21:37.685 [2024-04-27 05:04:07.442454] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.685 [2024-04-27 05:04:07.445631] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.685 [2024-04-27 05:04:07.445949] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:37.685 spare 00:21:37.685 05:04:07 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:37.945 [2024-04-27 05:04:07.714624] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:37.945 [2024-04-27 05:04:07.717493] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:37.945 [2024-04-27 05:04:07.717940] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:21:37.945 [2024-04-27 05:04:07.718083] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:37.945 [2024-04-27 05:04:07.718309] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:21:37.945 [2024-04-27 05:04:07.718915] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:21:37.945 [2024-04-27 05:04:07.719047] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:21:37.945 [2024-04-27 05:04:07.719416] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:37.945 05:04:07 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:37.945 05:04:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:37.945 05:04:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:37.945 05:04:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:37.945 05:04:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:37.945 05:04:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:37.945 05:04:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:37.945 05:04:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:37.945 05:04:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:37.945 05:04:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:37.945 05:04:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.945 05:04:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.203 05:04:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:38.203 "name": "raid_bdev1", 00:21:38.203 "uuid": "6426f91f-f8e9-4768-800d-c25cdf7fc2d0", 00:21:38.203 "strip_size_kb": 0, 00:21:38.203 "state": "online", 00:21:38.203 "raid_level": "raid1", 00:21:38.203 "superblock": true, 00:21:38.203 "num_base_bdevs": 2, 00:21:38.203 "num_base_bdevs_discovered": 2, 00:21:38.203 "num_base_bdevs_operational": 2, 00:21:38.203 
"base_bdevs_list": [ 00:21:38.203 { 00:21:38.203 "name": "BaseBdev1", 00:21:38.203 "uuid": "92a6d75f-fd40-5bca-9c15-53de71d3ad53", 00:21:38.203 "is_configured": true, 00:21:38.203 "data_offset": 2048, 00:21:38.203 "data_size": 63488 00:21:38.203 }, 00:21:38.203 { 00:21:38.203 "name": "BaseBdev2", 00:21:38.203 "uuid": "6388b786-1ffe-5993-850d-c4f86352670e", 00:21:38.203 "is_configured": true, 00:21:38.203 "data_offset": 2048, 00:21:38.203 "data_size": 63488 00:21:38.203 } 00:21:38.203 ] 00:21:38.203 }' 00:21:38.203 05:04:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:38.203 05:04:07 -- common/autotest_common.sh@10 -- # set +x 00:21:38.782 05:04:08 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:38.782 05:04:08 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:39.040 [2024-04-27 05:04:08.883999] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:39.040 05:04:08 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:39.040 05:04:08 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:39.040 05:04:08 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.298 05:04:09 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:39.298 05:04:09 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:39.298 05:04:09 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:39.298 05:04:09 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:39.298 05:04:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:39.298 05:04:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:39.298 05:04:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:39.298 05:04:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:39.298 05:04:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:39.298 05:04:09 -- bdev/nbd_common.sh@12 -- # local i 00:21:39.298 05:04:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:39.298 05:04:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:39.298 05:04:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:39.557 [2024-04-27 05:04:09.407867] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:39.557 /dev/nbd0 00:21:39.557 05:04:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:39.557 05:04:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:39.557 05:04:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:39.557 05:04:09 -- common/autotest_common.sh@857 -- # local i 00:21:39.557 05:04:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:39.557 05:04:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:39.557 05:04:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:39.557 05:04:09 -- common/autotest_common.sh@861 -- # break 00:21:39.557 05:04:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:39.557 05:04:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:39.557 05:04:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:39.815 1+0 records in 00:21:39.815 1+0 records out 00:21:39.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425434 s, 9.6 MB/s 00:21:39.815 05:04:09 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:39.815 05:04:09 -- common/autotest_common.sh@874 -- # size=4096 00:21:39.815 05:04:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:39.815 05:04:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:39.815 05:04:09 -- common/autotest_common.sh@877 -- # return 0 00:21:39.815 05:04:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:39.815 05:04:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:39.815 05:04:09 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:39.815 05:04:09 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:39.815 05:04:09 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:21:45.082 63488+0 records in 00:21:45.082 63488+0 records out 00:21:45.082 32505856 bytes (33 MB, 31 MiB) copied, 5.0068 s, 6.5 MB/s 00:21:45.082 05:04:14 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:45.082 05:04:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:45.082 05:04:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:45.083 05:04:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:45.083 05:04:14 -- bdev/nbd_common.sh@51 -- # local i 00:21:45.083 05:04:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:45.083 05:04:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:45.083 [2024-04-27 05:04:14.764695] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.083 05:04:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:45.083 05:04:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:45.083 05:04:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:45.083 05:04:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:45.083 05:04:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:45.083 05:04:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:45.083 05:04:14 -- bdev/nbd_common.sh@41 -- # break 00:21:45.083 05:04:14 -- bdev/nbd_common.sh@45 -- # return 0 00:21:45.083 05:04:14 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:45.340 [2024-04-27 05:04:15.024350] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:45.340 05:04:15 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:45.340 05:04:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:45.340 05:04:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:45.340 05:04:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:45.340 05:04:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:45.340 05:04:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:45.340 05:04:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:45.340 05:04:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:45.340 05:04:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:45.340 05:04:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:45.340 05:04:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.340 05:04:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.598 05:04:15 
-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:45.598 "name": "raid_bdev1", 00:21:45.598 "uuid": "6426f91f-f8e9-4768-800d-c25cdf7fc2d0", 00:21:45.598 "strip_size_kb": 0, 00:21:45.598 "state": "online", 00:21:45.598 "raid_level": "raid1", 00:21:45.598 "superblock": true, 00:21:45.598 "num_base_bdevs": 2, 00:21:45.598 "num_base_bdevs_discovered": 1, 00:21:45.598 "num_base_bdevs_operational": 1, 00:21:45.598 "base_bdevs_list": [ 00:21:45.598 { 00:21:45.598 "name": null, 00:21:45.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.598 "is_configured": false, 00:21:45.598 "data_offset": 2048, 00:21:45.598 "data_size": 63488 00:21:45.598 }, 00:21:45.598 { 00:21:45.598 "name": "BaseBdev2", 00:21:45.598 "uuid": "6388b786-1ffe-5993-850d-c4f86352670e", 00:21:45.598 "is_configured": true, 00:21:45.598 "data_offset": 2048, 00:21:45.598 "data_size": 63488 00:21:45.598 } 00:21:45.598 ] 00:21:45.598 }' 00:21:45.598 05:04:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:45.598 05:04:15 -- common/autotest_common.sh@10 -- # set +x 00:21:46.165 05:04:15 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:46.423 [2024-04-27 05:04:16.168672] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:46.424 [2024-04-27 05:04:16.169051] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:46.424 [2024-04-27 05:04:16.176457] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca2e80 00:21:46.424 [2024-04-27 05:04:16.179383] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:46.424 05:04:16 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:47.360 05:04:17 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:47.360 05:04:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:47.360 05:04:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:47.360 05:04:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:47.360 05:04:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:47.360 05:04:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.360 05:04:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.618 05:04:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:47.618 "name": "raid_bdev1", 00:21:47.618 "uuid": "6426f91f-f8e9-4768-800d-c25cdf7fc2d0", 00:21:47.618 "strip_size_kb": 0, 00:21:47.618 "state": "online", 00:21:47.618 "raid_level": "raid1", 00:21:47.618 "superblock": true, 00:21:47.618 "num_base_bdevs": 2, 00:21:47.618 "num_base_bdevs_discovered": 2, 00:21:47.618 "num_base_bdevs_operational": 2, 00:21:47.618 "process": { 00:21:47.618 "type": "rebuild", 00:21:47.619 "target": "spare", 00:21:47.619 "progress": { 00:21:47.619 "blocks": 24576, 00:21:47.619 "percent": 38 00:21:47.619 } 00:21:47.619 }, 00:21:47.619 "base_bdevs_list": [ 00:21:47.619 { 00:21:47.619 "name": "spare", 00:21:47.619 "uuid": "20891849-5c40-52a4-a1b2-b7c468cdd078", 00:21:47.619 "is_configured": true, 00:21:47.619 "data_offset": 2048, 00:21:47.619 "data_size": 63488 00:21:47.619 }, 00:21:47.619 { 00:21:47.619 "name": "BaseBdev2", 00:21:47.619 "uuid": "6388b786-1ffe-5993-850d-c4f86352670e", 00:21:47.619 "is_configured": true, 00:21:47.619 "data_offset": 2048, 00:21:47.619 "data_size": 63488 00:21:47.619 } 
00:21:47.619 ] 00:21:47.619 }' 00:21:47.619 05:04:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:47.877 05:04:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:47.877 05:04:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:47.877 05:04:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:47.877 05:04:17 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:48.135 [2024-04-27 05:04:17.798122] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:48.135 [2024-04-27 05:04:17.895959] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:48.135 [2024-04-27 05:04:17.897034] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.135 05:04:17 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:48.135 05:04:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:48.135 05:04:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:48.135 05:04:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:48.135 05:04:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:48.135 05:04:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:48.135 05:04:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:48.135 05:04:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:48.135 05:04:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:48.135 05:04:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:48.135 05:04:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.135 05:04:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.393 05:04:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:48.393 "name": "raid_bdev1", 00:21:48.393 "uuid": "6426f91f-f8e9-4768-800d-c25cdf7fc2d0", 00:21:48.393 "strip_size_kb": 0, 00:21:48.393 "state": "online", 00:21:48.393 "raid_level": "raid1", 00:21:48.393 "superblock": true, 00:21:48.393 "num_base_bdevs": 2, 00:21:48.393 "num_base_bdevs_discovered": 1, 00:21:48.393 "num_base_bdevs_operational": 1, 00:21:48.393 "base_bdevs_list": [ 00:21:48.393 { 00:21:48.393 "name": null, 00:21:48.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.394 "is_configured": false, 00:21:48.394 "data_offset": 2048, 00:21:48.394 "data_size": 63488 00:21:48.394 }, 00:21:48.394 { 00:21:48.394 "name": "BaseBdev2", 00:21:48.394 "uuid": "6388b786-1ffe-5993-850d-c4f86352670e", 00:21:48.394 "is_configured": true, 00:21:48.394 "data_offset": 2048, 00:21:48.394 "data_size": 63488 00:21:48.394 } 00:21:48.394 ] 00:21:48.394 }' 00:21:48.394 05:04:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:48.394 05:04:18 -- common/autotest_common.sh@10 -- # set +x 00:21:48.981 05:04:18 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:48.981 05:04:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:48.981 05:04:18 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:48.981 05:04:18 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:48.981 05:04:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:48.981 05:04:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:21:48.981 05:04:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.238 05:04:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:49.238 "name": "raid_bdev1", 00:21:49.238 "uuid": "6426f91f-f8e9-4768-800d-c25cdf7fc2d0", 00:21:49.238 "strip_size_kb": 0, 00:21:49.238 "state": "online", 00:21:49.238 "raid_level": "raid1", 00:21:49.238 "superblock": true, 00:21:49.238 "num_base_bdevs": 2, 00:21:49.238 "num_base_bdevs_discovered": 1, 00:21:49.238 "num_base_bdevs_operational": 1, 00:21:49.238 "base_bdevs_list": [ 00:21:49.238 { 00:21:49.238 "name": null, 00:21:49.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.238 "is_configured": false, 00:21:49.238 "data_offset": 2048, 00:21:49.238 "data_size": 63488 00:21:49.238 }, 00:21:49.238 { 00:21:49.238 "name": "BaseBdev2", 00:21:49.238 "uuid": "6388b786-1ffe-5993-850d-c4f86352670e", 00:21:49.238 "is_configured": true, 00:21:49.238 "data_offset": 2048, 00:21:49.238 "data_size": 63488 00:21:49.238 } 00:21:49.238 ] 00:21:49.238 }' 00:21:49.238 05:04:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:49.238 05:04:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:49.238 05:04:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:49.238 05:04:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:49.239 05:04:19 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:49.496 [2024-04-27 05:04:19.345761] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:49.496 [2024-04-27 05:04:19.346085] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:49.496 [2024-04-27 05:04:19.353199] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3020 00:21:49.496 [2024-04-27 05:04:19.355905] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:49.496 05:04:19 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:50.871 "name": "raid_bdev1", 00:21:50.871 "uuid": "6426f91f-f8e9-4768-800d-c25cdf7fc2d0", 00:21:50.871 "strip_size_kb": 0, 00:21:50.871 "state": "online", 00:21:50.871 "raid_level": "raid1", 00:21:50.871 "superblock": true, 00:21:50.871 "num_base_bdevs": 2, 00:21:50.871 "num_base_bdevs_discovered": 2, 00:21:50.871 "num_base_bdevs_operational": 2, 00:21:50.871 "process": { 00:21:50.871 "type": "rebuild", 00:21:50.871 "target": "spare", 00:21:50.871 "progress": { 00:21:50.871 "blocks": 24576, 00:21:50.871 "percent": 38 00:21:50.871 } 00:21:50.871 }, 00:21:50.871 "base_bdevs_list": [ 00:21:50.871 { 00:21:50.871 "name": "spare", 00:21:50.871 "uuid": "20891849-5c40-52a4-a1b2-b7c468cdd078", 00:21:50.871 "is_configured": true, 
00:21:50.871 "data_offset": 2048, 00:21:50.871 "data_size": 63488 00:21:50.871 }, 00:21:50.871 { 00:21:50.871 "name": "BaseBdev2", 00:21:50.871 "uuid": "6388b786-1ffe-5993-850d-c4f86352670e", 00:21:50.871 "is_configured": true, 00:21:50.871 "data_offset": 2048, 00:21:50.871 "data_size": 63488 00:21:50.871 } 00:21:50.871 ] 00:21:50.871 }' 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:50.871 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@657 -- # local timeout=426 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.871 05:04:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.130 05:04:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:51.130 "name": "raid_bdev1", 00:21:51.130 "uuid": "6426f91f-f8e9-4768-800d-c25cdf7fc2d0", 00:21:51.130 "strip_size_kb": 0, 00:21:51.130 "state": "online", 00:21:51.130 "raid_level": "raid1", 00:21:51.130 "superblock": true, 00:21:51.130 "num_base_bdevs": 2, 00:21:51.130 "num_base_bdevs_discovered": 2, 00:21:51.130 "num_base_bdevs_operational": 2, 00:21:51.130 "process": { 00:21:51.130 "type": "rebuild", 00:21:51.130 "target": "spare", 00:21:51.130 "progress": { 00:21:51.130 "blocks": 32768, 00:21:51.130 "percent": 51 00:21:51.130 } 00:21:51.130 }, 00:21:51.130 "base_bdevs_list": [ 00:21:51.130 { 00:21:51.130 "name": "spare", 00:21:51.130 "uuid": "20891849-5c40-52a4-a1b2-b7c468cdd078", 00:21:51.130 "is_configured": true, 00:21:51.130 "data_offset": 2048, 00:21:51.130 "data_size": 63488 00:21:51.130 }, 00:21:51.130 { 00:21:51.130 "name": "BaseBdev2", 00:21:51.130 "uuid": "6388b786-1ffe-5993-850d-c4f86352670e", 00:21:51.130 "is_configured": true, 00:21:51.130 "data_offset": 2048, 00:21:51.130 "data_size": 63488 00:21:51.130 } 00:21:51.130 ] 00:21:51.130 }' 00:21:51.130 05:04:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:51.388 05:04:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:51.388 05:04:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:51.388 05:04:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:51.388 05:04:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:52.321 05:04:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < 
timeout )) 00:21:52.321 05:04:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.321 05:04:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:52.321 05:04:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:52.321 05:04:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:52.321 05:04:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:52.321 05:04:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.321 05:04:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.580 05:04:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:52.580 "name": "raid_bdev1", 00:21:52.580 "uuid": "6426f91f-f8e9-4768-800d-c25cdf7fc2d0", 00:21:52.580 "strip_size_kb": 0, 00:21:52.580 "state": "online", 00:21:52.580 "raid_level": "raid1", 00:21:52.580 "superblock": true, 00:21:52.580 "num_base_bdevs": 2, 00:21:52.580 "num_base_bdevs_discovered": 2, 00:21:52.580 "num_base_bdevs_operational": 2, 00:21:52.580 "process": { 00:21:52.580 "type": "rebuild", 00:21:52.580 "target": "spare", 00:21:52.580 "progress": { 00:21:52.580 "blocks": 59392, 00:21:52.580 "percent": 93 00:21:52.580 } 00:21:52.580 }, 00:21:52.580 "base_bdevs_list": [ 00:21:52.580 { 00:21:52.580 "name": "spare", 00:21:52.580 "uuid": "20891849-5c40-52a4-a1b2-b7c468cdd078", 00:21:52.580 "is_configured": true, 00:21:52.580 "data_offset": 2048, 00:21:52.580 "data_size": 63488 00:21:52.580 }, 00:21:52.580 { 00:21:52.580 "name": "BaseBdev2", 00:21:52.580 "uuid": "6388b786-1ffe-5993-850d-c4f86352670e", 00:21:52.580 "is_configured": true, 00:21:52.580 "data_offset": 2048, 00:21:52.580 "data_size": 63488 00:21:52.580 } 00:21:52.580 ] 00:21:52.580 }' 00:21:52.580 05:04:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:52.580 05:04:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:52.580 05:04:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:52.580 05:04:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:52.580 05:04:22 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:52.580 [2024-04-27 05:04:22.481502] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:52.580 [2024-04-27 05:04:22.481806] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:52.580 [2024-04-27 05:04:22.482711] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:53.960 05:04:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:53.960 05:04:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.960 05:04:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:53.960 05:04:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:53.960 05:04:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:53.960 05:04:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:53.960 05:04:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.960 05:04:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.960 05:04:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:53.960 "name": "raid_bdev1", 00:21:53.960 "uuid": "6426f91f-f8e9-4768-800d-c25cdf7fc2d0", 00:21:53.960 "strip_size_kb": 0, 00:21:53.960 "state": "online", 00:21:53.960 
"raid_level": "raid1", 00:21:53.960 "superblock": true, 00:21:53.961 "num_base_bdevs": 2, 00:21:53.961 "num_base_bdevs_discovered": 2, 00:21:53.961 "num_base_bdevs_operational": 2, 00:21:53.961 "base_bdevs_list": [ 00:21:53.961 { 00:21:53.961 "name": "spare", 00:21:53.961 "uuid": "20891849-5c40-52a4-a1b2-b7c468cdd078", 00:21:53.961 "is_configured": true, 00:21:53.961 "data_offset": 2048, 00:21:53.961 "data_size": 63488 00:21:53.961 }, 00:21:53.961 { 00:21:53.961 "name": "BaseBdev2", 00:21:53.961 "uuid": "6388b786-1ffe-5993-850d-c4f86352670e", 00:21:53.961 "is_configured": true, 00:21:53.961 "data_offset": 2048, 00:21:53.961 "data_size": 63488 00:21:53.961 } 00:21:53.961 ] 00:21:53.961 }' 00:21:53.961 05:04:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:53.961 05:04:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:53.961 05:04:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:53.961 05:04:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:53.961 05:04:23 -- bdev/bdev_raid.sh@660 -- # break 00:21:53.961 05:04:23 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:53.961 05:04:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:53.961 05:04:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:53.961 05:04:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:53.961 05:04:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:53.961 05:04:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.961 05:04:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.218 05:04:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:54.218 "name": "raid_bdev1", 00:21:54.218 "uuid": "6426f91f-f8e9-4768-800d-c25cdf7fc2d0", 00:21:54.218 "strip_size_kb": 0, 00:21:54.218 "state": "online", 00:21:54.218 "raid_level": "raid1", 00:21:54.218 "superblock": true, 00:21:54.218 "num_base_bdevs": 2, 00:21:54.218 "num_base_bdevs_discovered": 2, 00:21:54.218 "num_base_bdevs_operational": 2, 00:21:54.218 "base_bdevs_list": [ 00:21:54.218 { 00:21:54.218 "name": "spare", 00:21:54.218 "uuid": "20891849-5c40-52a4-a1b2-b7c468cdd078", 00:21:54.218 "is_configured": true, 00:21:54.218 "data_offset": 2048, 00:21:54.218 "data_size": 63488 00:21:54.218 }, 00:21:54.218 { 00:21:54.218 "name": "BaseBdev2", 00:21:54.218 "uuid": "6388b786-1ffe-5993-850d-c4f86352670e", 00:21:54.218 "is_configured": true, 00:21:54.218 "data_offset": 2048, 00:21:54.218 "data_size": 63488 00:21:54.218 } 00:21:54.218 ] 00:21:54.218 }' 00:21:54.218 05:04:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:54.475 05:04:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:54.475 05:04:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:54.475 05:04:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:54.475 05:04:24 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:54.475 05:04:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:54.475 05:04:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:54.475 05:04:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:54.475 05:04:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:54.475 05:04:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:54.475 05:04:24 -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:21:54.475 05:04:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:54.475 05:04:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:54.475 05:04:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:54.475 05:04:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.475 05:04:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.733 05:04:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:54.733 "name": "raid_bdev1", 00:21:54.733 "uuid": "6426f91f-f8e9-4768-800d-c25cdf7fc2d0", 00:21:54.733 "strip_size_kb": 0, 00:21:54.733 "state": "online", 00:21:54.733 "raid_level": "raid1", 00:21:54.733 "superblock": true, 00:21:54.733 "num_base_bdevs": 2, 00:21:54.733 "num_base_bdevs_discovered": 2, 00:21:54.733 "num_base_bdevs_operational": 2, 00:21:54.733 "base_bdevs_list": [ 00:21:54.733 { 00:21:54.733 "name": "spare", 00:21:54.733 "uuid": "20891849-5c40-52a4-a1b2-b7c468cdd078", 00:21:54.733 "is_configured": true, 00:21:54.733 "data_offset": 2048, 00:21:54.733 "data_size": 63488 00:21:54.733 }, 00:21:54.733 { 00:21:54.733 "name": "BaseBdev2", 00:21:54.733 "uuid": "6388b786-1ffe-5993-850d-c4f86352670e", 00:21:54.733 "is_configured": true, 00:21:54.733 "data_offset": 2048, 00:21:54.733 "data_size": 63488 00:21:54.733 } 00:21:54.733 ] 00:21:54.733 }' 00:21:54.733 05:04:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:54.733 05:04:24 -- common/autotest_common.sh@10 -- # set +x 00:21:55.300 05:04:25 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:55.559 [2024-04-27 05:04:25.395390] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:55.559 [2024-04-27 05:04:25.395732] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:55.559 [2024-04-27 05:04:25.395995] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:55.559 [2024-04-27 05:04:25.396228] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:55.559 [2024-04-27 05:04:25.396358] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:21:55.559 05:04:25 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.559 05:04:25 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:55.818 05:04:25 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:55.818 05:04:25 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:55.818 05:04:25 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:55.818 05:04:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:55.818 05:04:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:55.818 05:04:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:55.818 05:04:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:55.818 05:04:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:55.818 05:04:25 -- bdev/nbd_common.sh@12 -- # local i 00:21:55.818 05:04:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:55.818 05:04:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:55.818 05:04:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:56.078 /dev/nbd0 00:21:56.078 05:04:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:56.078 05:04:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:56.078 05:04:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:56.078 05:04:25 -- common/autotest_common.sh@857 -- # local i 00:21:56.078 05:04:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:56.078 05:04:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:56.078 05:04:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:56.078 05:04:25 -- common/autotest_common.sh@861 -- # break 00:21:56.078 05:04:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:56.078 05:04:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:56.078 05:04:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:56.078 1+0 records in 00:21:56.078 1+0 records out 00:21:56.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000847514 s, 4.8 MB/s 00:21:56.078 05:04:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.078 05:04:25 -- common/autotest_common.sh@874 -- # size=4096 00:21:56.078 05:04:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.078 05:04:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:56.078 05:04:25 -- common/autotest_common.sh@877 -- # return 0 00:21:56.078 05:04:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:56.078 05:04:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:56.078 05:04:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:56.645 /dev/nbd1 00:21:56.645 05:04:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:56.645 05:04:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:56.645 05:04:26 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:56.645 05:04:26 -- common/autotest_common.sh@857 -- # local i 00:21:56.645 05:04:26 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:56.645 05:04:26 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:56.645 05:04:26 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:56.645 05:04:26 -- common/autotest_common.sh@861 -- # break 00:21:56.645 05:04:26 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:56.645 05:04:26 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:56.645 05:04:26 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:56.645 1+0 records in 00:21:56.645 1+0 records out 00:21:56.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000762052 s, 5.4 MB/s 00:21:56.645 05:04:26 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.645 05:04:26 -- common/autotest_common.sh@874 -- # size=4096 00:21:56.645 05:04:26 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.645 05:04:26 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:56.645 05:04:26 -- common/autotest_common.sh@877 -- # return 0 00:21:56.645 05:04:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:56.645 05:04:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:56.645 05:04:26 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 
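Note: the data-integrity step above exports BaseBdev1 and the rebuilt spare through NBD as /dev/nbd0 and /dev/nbd1 and byte-compares them. The earlier non-superblock run compared from offset 0 (cmp -i 0); this superblock run skips the first 1048576 bytes, which is the reported data_offset of 2048 blocks times the 512-byte block size, so the on-disk superblock region is excluded from the comparison. A small sketch of the same check, assuming the device names used above:

  # data_offset is reported in blocks; the base bdevs use a 512-byte block size here
  data_offset_blocks=2048
  skip_bytes=$(( data_offset_blocks * 512 ))    # 1048576
  cmp -i "$skip_bytes" /dev/nbd0 /dev/nbd1 && echo "base bdev and rebuilt spare match past the superblock"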
00:21:56.645 05:04:26 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:56.645 05:04:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:56.645 05:04:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:56.645 05:04:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:56.645 05:04:26 -- bdev/nbd_common.sh@51 -- # local i 00:21:56.645 05:04:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:56.645 05:04:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:56.904 05:04:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:56.904 05:04:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:56.904 05:04:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:56.904 05:04:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:56.904 05:04:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:56.904 05:04:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:56.904 05:04:26 -- bdev/nbd_common.sh@41 -- # break 00:21:56.904 05:04:26 -- bdev/nbd_common.sh@45 -- # return 0 00:21:56.904 05:04:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:56.904 05:04:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:57.163 05:04:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:57.163 05:04:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:57.163 05:04:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:57.163 05:04:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:57.163 05:04:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:57.163 05:04:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:57.163 05:04:26 -- bdev/nbd_common.sh@41 -- # break 00:21:57.163 05:04:26 -- bdev/nbd_common.sh@45 -- # return 0 00:21:57.163 05:04:26 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:57.163 05:04:26 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:57.163 05:04:26 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:57.163 05:04:26 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:57.431 05:04:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:57.703 [2024-04-27 05:04:27.426242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:57.703 [2024-04-27 05:04:27.427156] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.703 [2024-04-27 05:04:27.427458] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:21:57.703 [2024-04-27 05:04:27.427747] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.703 [2024-04-27 05:04:27.430891] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.703 [2024-04-27 05:04:27.431217] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:57.703 [2024-04-27 05:04:27.431582] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:57.703 [2024-04-27 05:04:27.431807] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:57.703 BaseBdev1 
00:21:57.703 05:04:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:57.703 05:04:27 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:21:57.703 05:04:27 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:57.961 05:04:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:58.220 [2024-04-27 05:04:27.971907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:58.220 [2024-04-27 05:04:27.972791] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:58.220 [2024-04-27 05:04:27.973114] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:58.220 [2024-04-27 05:04:27.973411] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:58.220 [2024-04-27 05:04:27.974205] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:58.220 [2024-04-27 05:04:27.974515] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:58.220 [2024-04-27 05:04:27.974876] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:58.220 [2024-04-27 05:04:27.975017] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:21:58.220 [2024-04-27 05:04:27.975124] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:58.220 [2024-04-27 05:04:27.975203] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state configuring 00:21:58.220 [2024-04-27 05:04:27.975502] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:58.220 BaseBdev2 00:21:58.220 05:04:27 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:58.479 05:04:28 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:58.738 [2024-04-27 05:04:28.448017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:58.738 [2024-04-27 05:04:28.448842] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:58.738 [2024-04-27 05:04:28.449169] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:58.738 [2024-04-27 05:04:28.449452] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:58.738 [2024-04-27 05:04:28.450312] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:58.738 [2024-04-27 05:04:28.450617] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:58.738 [2024-04-27 05:04:28.450995] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:58.738 [2024-04-27 05:04:28.451185] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:58.738 spare 00:21:58.738 05:04:28 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:58.738 05:04:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:58.738 05:04:28 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:21:58.738 05:04:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:58.738 05:04:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:58.738 05:04:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:58.738 05:04:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:58.738 05:04:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:58.738 05:04:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:58.738 05:04:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:58.738 05:04:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.738 05:04:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.738 [2024-04-27 05:04:28.551390] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:21:58.738 [2024-04-27 05:04:28.551645] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:58.738 [2024-04-27 05:04:28.551935] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:21:58.738 [2024-04-27 05:04:28.552744] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:21:58.738 [2024-04-27 05:04:28.552883] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:21:58.738 [2024-04-27 05:04:28.553215] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:58.996 05:04:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:58.996 "name": "raid_bdev1", 00:21:58.996 "uuid": "6426f91f-f8e9-4768-800d-c25cdf7fc2d0", 00:21:58.996 "strip_size_kb": 0, 00:21:58.996 "state": "online", 00:21:58.996 "raid_level": "raid1", 00:21:58.996 "superblock": true, 00:21:58.996 "num_base_bdevs": 2, 00:21:58.996 "num_base_bdevs_discovered": 2, 00:21:58.996 "num_base_bdevs_operational": 2, 00:21:58.996 "base_bdevs_list": [ 00:21:58.996 { 00:21:58.996 "name": "spare", 00:21:58.996 "uuid": "20891849-5c40-52a4-a1b2-b7c468cdd078", 00:21:58.996 "is_configured": true, 00:21:58.996 "data_offset": 2048, 00:21:58.996 "data_size": 63488 00:21:58.996 }, 00:21:58.996 { 00:21:58.996 "name": "BaseBdev2", 00:21:58.996 "uuid": "6388b786-1ffe-5993-850d-c4f86352670e", 00:21:58.996 "is_configured": true, 00:21:58.996 "data_offset": 2048, 00:21:58.996 "data_size": 63488 00:21:58.996 } 00:21:58.996 ] 00:21:58.996 }' 00:21:58.996 05:04:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:58.996 05:04:28 -- common/autotest_common.sh@10 -- # set +x 00:21:59.564 05:04:29 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:59.564 05:04:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:59.564 05:04:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:59.564 05:04:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:59.564 05:04:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:59.564 05:04:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.564 05:04:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.823 05:04:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:59.823 "name": "raid_bdev1", 00:21:59.823 "uuid": "6426f91f-f8e9-4768-800d-c25cdf7fc2d0", 00:21:59.823 "strip_size_kb": 0, 00:21:59.823 "state": "online", 00:21:59.823 "raid_level": "raid1", 
00:21:59.823 "superblock": true, 00:21:59.823 "num_base_bdevs": 2, 00:21:59.823 "num_base_bdevs_discovered": 2, 00:21:59.823 "num_base_bdevs_operational": 2, 00:21:59.823 "base_bdevs_list": [ 00:21:59.823 { 00:21:59.823 "name": "spare", 00:21:59.823 "uuid": "20891849-5c40-52a4-a1b2-b7c468cdd078", 00:21:59.823 "is_configured": true, 00:21:59.823 "data_offset": 2048, 00:21:59.823 "data_size": 63488 00:21:59.823 }, 00:21:59.823 { 00:21:59.823 "name": "BaseBdev2", 00:21:59.823 "uuid": "6388b786-1ffe-5993-850d-c4f86352670e", 00:21:59.823 "is_configured": true, 00:21:59.823 "data_offset": 2048, 00:21:59.823 "data_size": 63488 00:21:59.823 } 00:21:59.823 ] 00:21:59.823 }' 00:21:59.823 05:04:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:59.823 05:04:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:59.823 05:04:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:59.823 05:04:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:59.823 05:04:29 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.823 05:04:29 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:00.389 05:04:29 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:00.389 05:04:29 -- bdev/bdev_raid.sh@709 -- # killprocess 135287 00:22:00.389 05:04:29 -- common/autotest_common.sh@926 -- # '[' -z 135287 ']' 00:22:00.389 05:04:29 -- common/autotest_common.sh@930 -- # kill -0 135287 00:22:00.389 05:04:29 -- common/autotest_common.sh@931 -- # uname 00:22:00.389 05:04:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:00.389 05:04:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 135287 00:22:00.389 killing process with pid 135287 00:22:00.389 Received shutdown signal, test time was about 60.000000 seconds 00:22:00.389 00:22:00.389 Latency(us) 00:22:00.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.389 =================================================================================================================== 00:22:00.389 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:00.389 05:04:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:00.389 05:04:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:00.389 05:04:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 135287' 00:22:00.389 05:04:30 -- common/autotest_common.sh@945 -- # kill 135287 00:22:00.389 05:04:30 -- common/autotest_common.sh@950 -- # wait 135287 00:22:00.389 [2024-04-27 05:04:30.008506] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:00.389 [2024-04-27 05:04:30.008651] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:00.389 [2024-04-27 05:04:30.008756] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:00.389 [2024-04-27 05:04:30.008771] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:22:00.389 [2024-04-27 05:04:30.066876] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:00.648 ************************************ 00:22:00.648 END TEST raid_rebuild_test_sb 00:22:00.648 ************************************ 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:00.648 00:22:00.648 real 0m25.875s 00:22:00.648 user 0m37.743s 00:22:00.648 sys 
0m4.093s 00:22:00.648 05:04:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:00.648 05:04:30 -- common/autotest_common.sh@10 -- # set +x 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:22:00.648 05:04:30 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:00.648 05:04:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:00.648 05:04:30 -- common/autotest_common.sh@10 -- # set +x 00:22:00.648 ************************************ 00:22:00.648 START TEST raid_rebuild_test_io 00:22:00.648 ************************************ 00:22:00.648 05:04:30 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false true 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@544 -- # raid_pid=135916 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:00.648 05:04:30 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135916 /var/tmp/spdk-raid.sock 00:22:00.648 05:04:30 -- common/autotest_common.sh@819 -- # '[' -z 135916 ']' 00:22:00.648 05:04:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:00.648 05:04:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:00.648 05:04:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:00.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:00.648 05:04:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:00.648 05:04:30 -- common/autotest_common.sh@10 -- # set +x 00:22:00.907 [2024-04-27 05:04:30.581119] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:22:00.907 [2024-04-27 05:04:30.582795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135916 ] 00:22:00.907 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:00.907 Zero copy mechanism will not be used. 00:22:00.907 [2024-04-27 05:04:30.753680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.166 [2024-04-27 05:04:30.877119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.166 [2024-04-27 05:04:30.959996] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:01.733 05:04:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:01.733 05:04:31 -- common/autotest_common.sh@852 -- # return 0 00:22:01.733 05:04:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:01.733 05:04:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:01.733 05:04:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:01.992 BaseBdev1 00:22:01.992 05:04:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:01.992 05:04:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:01.992 05:04:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:02.251 BaseBdev2 00:22:02.251 05:04:32 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:02.509 spare_malloc 00:22:02.509 05:04:32 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:02.768 spare_delay 00:22:02.768 05:04:32 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:03.027 [2024-04-27 05:04:32.811739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:03.027 [2024-04-27 05:04:32.812198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.027 [2024-04-27 05:04:32.812391] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:22:03.027 [2024-04-27 05:04:32.812610] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:03.027 [2024-04-27 05:04:32.815833] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.027 [2024-04-27 05:04:32.816030] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:03.027 spare 00:22:03.027 05:04:32 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:03.288 [2024-04-27 05:04:33.080606] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:03.288 [2024-04-27 05:04:33.083130] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:03.288 [2024-04-27 05:04:33.083402] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:22:03.288 [2024-04-27 05:04:33.083519] bdev_raid.c:1585:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 65536, blocklen 512 00:22:03.288 [2024-04-27 05:04:33.083852] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:22:03.288 [2024-04-27 05:04:33.084468] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:22:03.288 [2024-04-27 05:04:33.084619] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:22:03.288 [2024-04-27 05:04:33.085036] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.288 05:04:33 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:03.288 05:04:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:03.288 05:04:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:03.288 05:04:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:03.288 05:04:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:03.288 05:04:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:03.288 05:04:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:03.288 05:04:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:03.288 05:04:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:03.288 05:04:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:03.288 05:04:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.288 05:04:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.546 05:04:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:03.546 "name": "raid_bdev1", 00:22:03.546 "uuid": "5d62f491-45e3-4958-b1f2-c18121fdb71c", 00:22:03.546 "strip_size_kb": 0, 00:22:03.546 "state": "online", 00:22:03.546 "raid_level": "raid1", 00:22:03.546 "superblock": false, 00:22:03.546 "num_base_bdevs": 2, 00:22:03.546 "num_base_bdevs_discovered": 2, 00:22:03.546 "num_base_bdevs_operational": 2, 00:22:03.546 "base_bdevs_list": [ 00:22:03.546 { 00:22:03.546 "name": "BaseBdev1", 00:22:03.546 "uuid": "9d66175c-4e08-4eb5-954d-ea5a8409ae4d", 00:22:03.546 "is_configured": true, 00:22:03.546 "data_offset": 0, 00:22:03.546 "data_size": 65536 00:22:03.546 }, 00:22:03.546 { 00:22:03.546 "name": "BaseBdev2", 00:22:03.546 "uuid": "58ec5c26-e67b-41fe-b2e2-e3a5c5348d40", 00:22:03.546 "is_configured": true, 00:22:03.546 "data_offset": 0, 00:22:03.546 "data_size": 65536 00:22:03.546 } 00:22:03.546 ] 00:22:03.546 }' 00:22:03.546 05:04:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:03.546 05:04:33 -- common/autotest_common.sh@10 -- # set +x 00:22:04.112 05:04:34 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:04.112 05:04:34 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:04.370 [2024-04-27 05:04:34.273612] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:04.629 05:04:34 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:04.629 05:04:34 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:04.629 05:04:34 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.887 05:04:34 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:04.887 05:04:34 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:04.887 05:04:34 -- bdev/bdev_raid.sh@591 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:04.887 05:04:34 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:04.887 [2024-04-27 05:04:34.648739] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:22:04.887 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:04.887 Zero copy mechanism will not be used. 00:22:04.887 Running I/O for 60 seconds... 00:22:04.887 [2024-04-27 05:04:34.790192] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:05.146 [2024-04-27 05:04:34.805976] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005860 00:22:05.146 05:04:34 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:05.146 05:04:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:05.146 05:04:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:05.146 05:04:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:05.146 05:04:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:05.146 05:04:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:05.146 05:04:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:05.146 05:04:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:05.146 05:04:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:05.146 05:04:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:05.146 05:04:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.146 05:04:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.404 05:04:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:05.404 "name": "raid_bdev1", 00:22:05.404 "uuid": "5d62f491-45e3-4958-b1f2-c18121fdb71c", 00:22:05.404 "strip_size_kb": 0, 00:22:05.405 "state": "online", 00:22:05.405 "raid_level": "raid1", 00:22:05.405 "superblock": false, 00:22:05.405 "num_base_bdevs": 2, 00:22:05.405 "num_base_bdevs_discovered": 1, 00:22:05.405 "num_base_bdevs_operational": 1, 00:22:05.405 "base_bdevs_list": [ 00:22:05.405 { 00:22:05.405 "name": null, 00:22:05.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.405 "is_configured": false, 00:22:05.405 "data_offset": 0, 00:22:05.405 "data_size": 65536 00:22:05.405 }, 00:22:05.405 { 00:22:05.405 "name": "BaseBdev2", 00:22:05.405 "uuid": "58ec5c26-e67b-41fe-b2e2-e3a5c5348d40", 00:22:05.405 "is_configured": true, 00:22:05.405 "data_offset": 0, 00:22:05.405 "data_size": 65536 00:22:05.405 } 00:22:05.405 ] 00:22:05.405 }' 00:22:05.405 05:04:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:05.405 05:04:35 -- common/autotest_common.sh@10 -- # set +x 00:22:05.971 05:04:35 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:06.231 [2024-04-27 05:04:36.013208] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:06.231 [2024-04-27 05:04:36.013587] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:06.231 05:04:36 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:06.231 [2024-04-27 05:04:36.082492] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:06.231 [2024-04-27 05:04:36.085130] 
bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:06.490 [2024-04-27 05:04:36.203588] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:06.490 [2024-04-27 05:04:36.204757] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:06.751 [2024-04-27 05:04:36.417935] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:06.751 [2024-04-27 05:04:36.418681] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:07.011 [2024-04-27 05:04:36.777853] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:07.270 [2024-04-27 05:04:37.052386] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:07.270 05:04:37 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:07.270 05:04:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:07.270 05:04:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:07.270 05:04:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:07.270 05:04:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:07.270 05:04:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.270 05:04:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.529 05:04:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:07.529 "name": "raid_bdev1", 00:22:07.529 "uuid": "5d62f491-45e3-4958-b1f2-c18121fdb71c", 00:22:07.529 "strip_size_kb": 0, 00:22:07.529 "state": "online", 00:22:07.529 "raid_level": "raid1", 00:22:07.529 "superblock": false, 00:22:07.529 "num_base_bdevs": 2, 00:22:07.529 "num_base_bdevs_discovered": 2, 00:22:07.529 "num_base_bdevs_operational": 2, 00:22:07.529 "process": { 00:22:07.529 "type": "rebuild", 00:22:07.529 "target": "spare", 00:22:07.529 "progress": { 00:22:07.529 "blocks": 12288, 00:22:07.529 "percent": 18 00:22:07.529 } 00:22:07.529 }, 00:22:07.529 "base_bdevs_list": [ 00:22:07.529 { 00:22:07.529 "name": "spare", 00:22:07.529 "uuid": "1e1b4e95-ee90-5884-a622-3c621563c4bf", 00:22:07.529 "is_configured": true, 00:22:07.529 "data_offset": 0, 00:22:07.529 "data_size": 65536 00:22:07.529 }, 00:22:07.529 { 00:22:07.529 "name": "BaseBdev2", 00:22:07.529 "uuid": "58ec5c26-e67b-41fe-b2e2-e3a5c5348d40", 00:22:07.529 "is_configured": true, 00:22:07.529 "data_offset": 0, 00:22:07.529 "data_size": 65536 00:22:07.529 } 00:22:07.529 ] 00:22:07.529 }' 00:22:07.529 05:04:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:07.529 [2024-04-27 05:04:37.378638] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:07.529 05:04:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:07.529 05:04:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:07.787 05:04:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:07.787 05:04:37 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:07.787 [2024-04-27 05:04:37.497950] bdev_raid.c: 
723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:07.787 [2024-04-27 05:04:37.498636] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:07.787 [2024-04-27 05:04:37.693498] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:08.046 [2024-04-27 05:04:37.772170] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:08.046 [2024-04-27 05:04:37.783300] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:08.046 [2024-04-27 05:04:37.810318] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005860 00:22:08.046 05:04:37 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:08.046 05:04:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:08.046 05:04:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:08.046 05:04:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:08.046 05:04:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:08.046 05:04:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:08.046 05:04:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:08.046 05:04:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:08.046 05:04:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:08.046 05:04:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:08.046 05:04:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.046 05:04:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.305 05:04:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:08.305 "name": "raid_bdev1", 00:22:08.305 "uuid": "5d62f491-45e3-4958-b1f2-c18121fdb71c", 00:22:08.305 "strip_size_kb": 0, 00:22:08.305 "state": "online", 00:22:08.305 "raid_level": "raid1", 00:22:08.305 "superblock": false, 00:22:08.305 "num_base_bdevs": 2, 00:22:08.305 "num_base_bdevs_discovered": 1, 00:22:08.305 "num_base_bdevs_operational": 1, 00:22:08.305 "base_bdevs_list": [ 00:22:08.305 { 00:22:08.305 "name": null, 00:22:08.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.305 "is_configured": false, 00:22:08.305 "data_offset": 0, 00:22:08.305 "data_size": 65536 00:22:08.305 }, 00:22:08.305 { 00:22:08.305 "name": "BaseBdev2", 00:22:08.305 "uuid": "58ec5c26-e67b-41fe-b2e2-e3a5c5348d40", 00:22:08.305 "is_configured": true, 00:22:08.305 "data_offset": 0, 00:22:08.305 "data_size": 65536 00:22:08.305 } 00:22:08.305 ] 00:22:08.305 }' 00:22:08.305 05:04:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:08.305 05:04:38 -- common/autotest_common.sh@10 -- # set +x 00:22:09.244 05:04:38 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:09.244 05:04:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:09.244 05:04:38 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:09.244 05:04:38 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:09.244 05:04:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:09.244 05:04:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.244 05:04:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.244 05:04:39 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:09.244 "name": "raid_bdev1", 00:22:09.244 "uuid": "5d62f491-45e3-4958-b1f2-c18121fdb71c", 00:22:09.244 "strip_size_kb": 0, 00:22:09.244 "state": "online", 00:22:09.244 "raid_level": "raid1", 00:22:09.244 "superblock": false, 00:22:09.244 "num_base_bdevs": 2, 00:22:09.244 "num_base_bdevs_discovered": 1, 00:22:09.244 "num_base_bdevs_operational": 1, 00:22:09.244 "base_bdevs_list": [ 00:22:09.244 { 00:22:09.244 "name": null, 00:22:09.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.244 "is_configured": false, 00:22:09.244 "data_offset": 0, 00:22:09.244 "data_size": 65536 00:22:09.244 }, 00:22:09.244 { 00:22:09.244 "name": "BaseBdev2", 00:22:09.244 "uuid": "58ec5c26-e67b-41fe-b2e2-e3a5c5348d40", 00:22:09.244 "is_configured": true, 00:22:09.244 "data_offset": 0, 00:22:09.244 "data_size": 65536 00:22:09.244 } 00:22:09.244 ] 00:22:09.244 }' 00:22:09.244 05:04:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:09.244 05:04:39 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:09.244 05:04:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:09.503 05:04:39 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:09.503 05:04:39 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:09.763 [2024-04-27 05:04:39.428077] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:09.763 [2024-04-27 05:04:39.428250] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:09.763 05:04:39 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:09.763 [2024-04-27 05:04:39.485317] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:09.763 [2024-04-27 05:04:39.487931] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:09.763 [2024-04-27 05:04:39.632475] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:10.022 [2024-04-27 05:04:39.852650] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:10.022 [2024-04-27 05:04:39.853443] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:10.589 [2024-04-27 05:04:40.224666] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:10.589 05:04:40 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:10.589 05:04:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:10.589 05:04:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:10.589 05:04:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:10.589 05:04:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:10.589 05:04:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.589 05:04:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.156 05:04:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:11.156 "name": "raid_bdev1", 00:22:11.156 "uuid": "5d62f491-45e3-4958-b1f2-c18121fdb71c", 00:22:11.156 "strip_size_kb": 0, 00:22:11.156 "state": "online", 00:22:11.156 "raid_level": "raid1", 00:22:11.156 "superblock": false, 
00:22:11.156 "num_base_bdevs": 2, 00:22:11.156 "num_base_bdevs_discovered": 2, 00:22:11.156 "num_base_bdevs_operational": 2, 00:22:11.156 "process": { 00:22:11.156 "type": "rebuild", 00:22:11.156 "target": "spare", 00:22:11.156 "progress": { 00:22:11.156 "blocks": 18432, 00:22:11.156 "percent": 28 00:22:11.156 } 00:22:11.156 }, 00:22:11.156 "base_bdevs_list": [ 00:22:11.156 { 00:22:11.156 "name": "spare", 00:22:11.156 "uuid": "1e1b4e95-ee90-5884-a622-3c621563c4bf", 00:22:11.156 "is_configured": true, 00:22:11.156 "data_offset": 0, 00:22:11.156 "data_size": 65536 00:22:11.156 }, 00:22:11.156 { 00:22:11.156 "name": "BaseBdev2", 00:22:11.156 "uuid": "58ec5c26-e67b-41fe-b2e2-e3a5c5348d40", 00:22:11.156 "is_configured": true, 00:22:11.156 "data_offset": 0, 00:22:11.156 "data_size": 65536 00:22:11.156 } 00:22:11.156 ] 00:22:11.156 }' 00:22:11.156 05:04:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:11.156 05:04:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:11.156 05:04:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:11.156 05:04:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:11.156 05:04:40 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:11.156 05:04:40 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:22:11.156 05:04:40 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:11.156 05:04:40 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:22:11.156 05:04:40 -- bdev/bdev_raid.sh@657 -- # local timeout=446 00:22:11.156 05:04:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:11.156 05:04:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:11.156 05:04:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:11.156 05:04:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:11.156 05:04:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:11.156 05:04:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:11.156 05:04:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.156 05:04:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.156 [2024-04-27 05:04:40.927216] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:11.415 05:04:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:11.415 "name": "raid_bdev1", 00:22:11.415 "uuid": "5d62f491-45e3-4958-b1f2-c18121fdb71c", 00:22:11.415 "strip_size_kb": 0, 00:22:11.415 "state": "online", 00:22:11.415 "raid_level": "raid1", 00:22:11.415 "superblock": false, 00:22:11.416 "num_base_bdevs": 2, 00:22:11.416 "num_base_bdevs_discovered": 2, 00:22:11.416 "num_base_bdevs_operational": 2, 00:22:11.416 "process": { 00:22:11.416 "type": "rebuild", 00:22:11.416 "target": "spare", 00:22:11.416 "progress": { 00:22:11.416 "blocks": 24576, 00:22:11.416 "percent": 37 00:22:11.416 } 00:22:11.416 }, 00:22:11.416 "base_bdevs_list": [ 00:22:11.416 { 00:22:11.416 "name": "spare", 00:22:11.416 "uuid": "1e1b4e95-ee90-5884-a622-3c621563c4bf", 00:22:11.416 "is_configured": true, 00:22:11.416 "data_offset": 0, 00:22:11.416 "data_size": 65536 00:22:11.416 }, 00:22:11.416 { 00:22:11.416 "name": "BaseBdev2", 00:22:11.416 "uuid": "58ec5c26-e67b-41fe-b2e2-e3a5c5348d40", 00:22:11.416 "is_configured": true, 00:22:11.416 "data_offset": 0, 00:22:11.416 "data_size": 65536 00:22:11.416 } 
00:22:11.416 ] 00:22:11.416 }' 00:22:11.416 05:04:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:11.416 05:04:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:11.416 05:04:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:11.416 05:04:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:11.416 05:04:41 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:11.983 [2024-04-27 05:04:41.616811] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:12.550 [2024-04-27 05:04:42.214623] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:22:12.550 05:04:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:12.550 05:04:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:12.550 05:04:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:12.550 05:04:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:12.550 05:04:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:12.550 05:04:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:12.550 05:04:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.550 05:04:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.550 [2024-04-27 05:04:42.446364] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:12.808 05:04:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:12.808 "name": "raid_bdev1", 00:22:12.808 "uuid": "5d62f491-45e3-4958-b1f2-c18121fdb71c", 00:22:12.808 "strip_size_kb": 0, 00:22:12.808 "state": "online", 00:22:12.808 "raid_level": "raid1", 00:22:12.808 "superblock": false, 00:22:12.809 "num_base_bdevs": 2, 00:22:12.809 "num_base_bdevs_discovered": 2, 00:22:12.809 "num_base_bdevs_operational": 2, 00:22:12.809 "process": { 00:22:12.809 "type": "rebuild", 00:22:12.809 "target": "spare", 00:22:12.809 "progress": { 00:22:12.809 "blocks": 47104, 00:22:12.809 "percent": 71 00:22:12.809 } 00:22:12.809 }, 00:22:12.809 "base_bdevs_list": [ 00:22:12.809 { 00:22:12.809 "name": "spare", 00:22:12.809 "uuid": "1e1b4e95-ee90-5884-a622-3c621563c4bf", 00:22:12.809 "is_configured": true, 00:22:12.809 "data_offset": 0, 00:22:12.809 "data_size": 65536 00:22:12.809 }, 00:22:12.809 { 00:22:12.809 "name": "BaseBdev2", 00:22:12.809 "uuid": "58ec5c26-e67b-41fe-b2e2-e3a5c5348d40", 00:22:12.809 "is_configured": true, 00:22:12.809 "data_offset": 0, 00:22:12.809 "data_size": 65536 00:22:12.809 } 00:22:12.809 ] 00:22:12.809 }' 00:22:12.809 05:04:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:12.809 05:04:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:12.809 05:04:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:12.809 05:04:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:12.809 05:04:42 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:13.068 [2024-04-27 05:04:42.932137] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:22:13.636 [2024-04-27 05:04:43.265616] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:22:13.895 05:04:43 -- bdev/bdev_raid.sh@658 -- 
# (( SECONDS < timeout )) 00:22:13.895 05:04:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:13.895 05:04:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:13.895 05:04:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:13.895 05:04:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:13.895 05:04:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:13.895 05:04:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.895 05:04:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.895 [2024-04-27 05:04:43.716326] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:14.153 [2024-04-27 05:04:43.824080] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:14.153 [2024-04-27 05:04:43.827202] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:14.154 05:04:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:14.154 "name": "raid_bdev1", 00:22:14.154 "uuid": "5d62f491-45e3-4958-b1f2-c18121fdb71c", 00:22:14.154 "strip_size_kb": 0, 00:22:14.154 "state": "online", 00:22:14.154 "raid_level": "raid1", 00:22:14.154 "superblock": false, 00:22:14.154 "num_base_bdevs": 2, 00:22:14.154 "num_base_bdevs_discovered": 2, 00:22:14.154 "num_base_bdevs_operational": 2, 00:22:14.154 "base_bdevs_list": [ 00:22:14.154 { 00:22:14.154 "name": "spare", 00:22:14.154 "uuid": "1e1b4e95-ee90-5884-a622-3c621563c4bf", 00:22:14.154 "is_configured": true, 00:22:14.154 "data_offset": 0, 00:22:14.154 "data_size": 65536 00:22:14.154 }, 00:22:14.154 { 00:22:14.154 "name": "BaseBdev2", 00:22:14.154 "uuid": "58ec5c26-e67b-41fe-b2e2-e3a5c5348d40", 00:22:14.154 "is_configured": true, 00:22:14.154 "data_offset": 0, 00:22:14.154 "data_size": 65536 00:22:14.154 } 00:22:14.154 ] 00:22:14.154 }' 00:22:14.154 05:04:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:14.154 05:04:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:14.154 05:04:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:14.154 05:04:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:14.154 05:04:44 -- bdev/bdev_raid.sh@660 -- # break 00:22:14.154 05:04:44 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:14.154 05:04:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:14.154 05:04:44 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:14.154 05:04:44 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:14.154 05:04:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:14.154 05:04:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.154 05:04:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.412 05:04:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:14.412 "name": "raid_bdev1", 00:22:14.412 "uuid": "5d62f491-45e3-4958-b1f2-c18121fdb71c", 00:22:14.412 "strip_size_kb": 0, 00:22:14.412 "state": "online", 00:22:14.412 "raid_level": "raid1", 00:22:14.412 "superblock": false, 00:22:14.412 "num_base_bdevs": 2, 00:22:14.412 "num_base_bdevs_discovered": 2, 00:22:14.412 "num_base_bdevs_operational": 2, 00:22:14.412 "base_bdevs_list": [ 00:22:14.412 { 00:22:14.412 "name": "spare", 00:22:14.412 "uuid": 
"1e1b4e95-ee90-5884-a622-3c621563c4bf", 00:22:14.412 "is_configured": true, 00:22:14.412 "data_offset": 0, 00:22:14.412 "data_size": 65536 00:22:14.412 }, 00:22:14.412 { 00:22:14.412 "name": "BaseBdev2", 00:22:14.412 "uuid": "58ec5c26-e67b-41fe-b2e2-e3a5c5348d40", 00:22:14.412 "is_configured": true, 00:22:14.412 "data_offset": 0, 00:22:14.412 "data_size": 65536 00:22:14.412 } 00:22:14.412 ] 00:22:14.412 }' 00:22:14.412 05:04:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:14.671 05:04:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:14.671 05:04:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:14.671 05:04:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:14.671 05:04:44 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:14.671 05:04:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:14.671 05:04:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:14.671 05:04:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:14.671 05:04:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:14.671 05:04:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:14.672 05:04:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:14.672 05:04:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:14.672 05:04:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:14.672 05:04:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:14.672 05:04:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.672 05:04:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.930 05:04:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:14.930 "name": "raid_bdev1", 00:22:14.930 "uuid": "5d62f491-45e3-4958-b1f2-c18121fdb71c", 00:22:14.930 "strip_size_kb": 0, 00:22:14.930 "state": "online", 00:22:14.930 "raid_level": "raid1", 00:22:14.930 "superblock": false, 00:22:14.930 "num_base_bdevs": 2, 00:22:14.930 "num_base_bdevs_discovered": 2, 00:22:14.930 "num_base_bdevs_operational": 2, 00:22:14.930 "base_bdevs_list": [ 00:22:14.930 { 00:22:14.930 "name": "spare", 00:22:14.930 "uuid": "1e1b4e95-ee90-5884-a622-3c621563c4bf", 00:22:14.930 "is_configured": true, 00:22:14.930 "data_offset": 0, 00:22:14.930 "data_size": 65536 00:22:14.930 }, 00:22:14.930 { 00:22:14.930 "name": "BaseBdev2", 00:22:14.930 "uuid": "58ec5c26-e67b-41fe-b2e2-e3a5c5348d40", 00:22:14.930 "is_configured": true, 00:22:14.930 "data_offset": 0, 00:22:14.930 "data_size": 65536 00:22:14.930 } 00:22:14.930 ] 00:22:14.930 }' 00:22:14.930 05:04:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:14.930 05:04:44 -- common/autotest_common.sh@10 -- # set +x 00:22:15.498 05:04:45 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:15.757 [2024-04-27 05:04:45.501514] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:15.757 [2024-04-27 05:04:45.501808] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:15.757 00:22:15.757 Latency(us) 00:22:15.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.757 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:15.757 raid_bdev1 : 10.94 99.93 299.80 0.00 0.00 13545.62 335.13 118679.74 
00:22:15.757 =================================================================================================================== 00:22:15.757 Total : 99.93 299.80 0.00 0.00 13545.62 335.13 118679.74 00:22:15.757 [2024-04-27 05:04:45.595284] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:15.757 [2024-04-27 05:04:45.595539] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:15.757 0 00:22:15.757 [2024-04-27 05:04:45.595686] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:15.757 [2024-04-27 05:04:45.595704] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:22:15.757 05:04:45 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.757 05:04:45 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:16.015 05:04:45 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:16.015 05:04:45 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:16.015 05:04:45 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:16.015 05:04:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:16.016 05:04:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:16.016 05:04:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:16.016 05:04:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:16.016 05:04:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:16.016 05:04:45 -- bdev/nbd_common.sh@12 -- # local i 00:22:16.016 05:04:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:16.016 05:04:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:16.016 05:04:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:16.584 /dev/nbd0 00:22:16.584 05:04:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:16.584 05:04:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:16.584 05:04:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:16.584 05:04:46 -- common/autotest_common.sh@857 -- # local i 00:22:16.584 05:04:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:16.584 05:04:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:16.584 05:04:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:16.584 05:04:46 -- common/autotest_common.sh@861 -- # break 00:22:16.584 05:04:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:16.584 05:04:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:16.584 05:04:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:16.584 1+0 records in 00:22:16.584 1+0 records out 00:22:16.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574472 s, 7.1 MB/s 00:22:16.584 05:04:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:16.584 05:04:46 -- common/autotest_common.sh@874 -- # size=4096 00:22:16.584 05:04:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:16.584 05:04:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:16.584 05:04:46 -- common/autotest_common.sh@877 -- # return 0 00:22:16.584 05:04:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:16.584 05:04:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:16.584 
05:04:46 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:16.584 05:04:46 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:22:16.584 05:04:46 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:22:16.584 05:04:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:16.584 05:04:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:22:16.584 05:04:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:16.584 05:04:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:16.584 05:04:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:16.584 05:04:46 -- bdev/nbd_common.sh@12 -- # local i 00:22:16.584 05:04:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:16.584 05:04:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:16.584 05:04:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:22:16.584 /dev/nbd1 00:22:16.584 05:04:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:16.843 05:04:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:16.843 05:04:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:16.843 05:04:46 -- common/autotest_common.sh@857 -- # local i 00:22:16.843 05:04:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:16.843 05:04:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:16.843 05:04:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:16.843 05:04:46 -- common/autotest_common.sh@861 -- # break 00:22:16.843 05:04:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:16.843 05:04:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:16.843 05:04:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:16.843 1+0 records in 00:22:16.843 1+0 records out 00:22:16.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000840098 s, 4.9 MB/s 00:22:16.843 05:04:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:16.843 05:04:46 -- common/autotest_common.sh@874 -- # size=4096 00:22:16.843 05:04:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:16.843 05:04:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:16.843 05:04:46 -- common/autotest_common.sh@877 -- # return 0 00:22:16.843 05:04:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:16.843 05:04:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:16.843 05:04:46 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:16.843 05:04:46 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:16.843 05:04:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:16.843 05:04:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:16.843 05:04:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:16.843 05:04:46 -- bdev/nbd_common.sh@51 -- # local i 00:22:16.843 05:04:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:16.843 05:04:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:17.101 05:04:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:17.101 05:04:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:17.101 05:04:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:17.101 05:04:46 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:17.101 05:04:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:17.101 05:04:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:17.101 05:04:46 -- bdev/nbd_common.sh@41 -- # break 00:22:17.101 05:04:46 -- bdev/nbd_common.sh@45 -- # return 0 00:22:17.101 05:04:46 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:17.101 05:04:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:17.101 05:04:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:17.101 05:04:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:17.101 05:04:46 -- bdev/nbd_common.sh@51 -- # local i 00:22:17.101 05:04:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:17.101 05:04:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:17.360 05:04:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:17.360 05:04:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:17.360 05:04:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:17.360 05:04:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:17.360 05:04:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:17.360 05:04:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:17.360 05:04:47 -- bdev/nbd_common.sh@41 -- # break 00:22:17.360 05:04:47 -- bdev/nbd_common.sh@45 -- # return 0 00:22:17.360 05:04:47 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:17.360 05:04:47 -- bdev/bdev_raid.sh@709 -- # killprocess 135916 00:22:17.360 05:04:47 -- common/autotest_common.sh@926 -- # '[' -z 135916 ']' 00:22:17.360 05:04:47 -- common/autotest_common.sh@930 -- # kill -0 135916 00:22:17.360 05:04:47 -- common/autotest_common.sh@931 -- # uname 00:22:17.360 05:04:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:17.360 05:04:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 135916 00:22:17.360 killing process with pid 135916 00:22:17.360 Received shutdown signal, test time was about 12.543670 seconds 00:22:17.360 00:22:17.360 Latency(us) 00:22:17.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.360 =================================================================================================================== 00:22:17.360 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:17.360 05:04:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:17.360 05:04:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:17.360 05:04:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 135916' 00:22:17.360 05:04:47 -- common/autotest_common.sh@945 -- # kill 135916 00:22:17.360 05:04:47 -- common/autotest_common.sh@950 -- # wait 135916 00:22:17.360 [2024-04-27 05:04:47.195737] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:17.360 [2024-04-27 05:04:47.243833] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:17.927 05:04:47 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:17.927 00:22:17.927 real 0m17.117s 00:22:17.927 user 0m27.225s 00:22:17.927 sys 0m2.064s 00:22:17.927 05:04:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:17.927 05:04:47 -- common/autotest_common.sh@10 -- # set +x 00:22:17.927 ************************************ 00:22:17.927 END TEST raid_rebuild_test_io 00:22:17.927 ************************************ 00:22:17.927 05:04:47 -- bdev/bdev_raid.sh@738 
-- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:22:17.927 05:04:47 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:17.927 05:04:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:17.927 05:04:47 -- common/autotest_common.sh@10 -- # set +x 00:22:17.927 ************************************ 00:22:17.927 START TEST raid_rebuild_test_sb_io 00:22:17.927 ************************************ 00:22:17.927 05:04:47 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true true 00:22:17.927 05:04:47 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:17.927 05:04:47 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:22:17.927 05:04:47 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:17.927 05:04:47 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:17.927 05:04:47 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:17.927 05:04:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:17.927 05:04:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:17.927 05:04:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:17.927 05:04:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:17.927 05:04:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:17.927 05:04:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:17.927 05:04:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:17.927 05:04:47 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:17.928 05:04:47 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:17.928 05:04:47 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:17.928 05:04:47 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:17.928 05:04:47 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:17.928 05:04:47 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:17.928 05:04:47 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:17.928 05:04:47 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:17.928 05:04:47 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:17.928 05:04:47 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:17.928 05:04:47 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:17.928 05:04:47 -- bdev/bdev_raid.sh@544 -- # raid_pid=136377 00:22:17.928 05:04:47 -- bdev/bdev_raid.sh@545 -- # waitforlisten 136377 /var/tmp/spdk-raid.sock 00:22:17.928 05:04:47 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:17.928 05:04:47 -- common/autotest_common.sh@819 -- # '[' -z 136377 ']' 00:22:17.928 05:04:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:17.928 05:04:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:17.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:17.928 05:04:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:17.928 05:04:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:17.928 05:04:47 -- common/autotest_common.sh@10 -- # set +x 00:22:17.928 [2024-04-27 05:04:47.743277] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:17.928 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:17.928 Zero copy mechanism will not be used. 
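A minimal sketch of this launch step, assuming the repo path and RPC socket seen in the entries above (the real flow lives in test/bdev/bdev_raid.sh, which waits via waitforlisten; the polling loop below is only an approximation of that helper):

ROOT=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk-raid.sock

# Start bdevperf against raid_bdev1: 60 s of 50/50 random read/write, 3 MiB I/Os,
# queue depth 2. -z defers I/O until the perform_tests RPC arrives; -L bdev_raid
# enables the *DEBUG* logging that fills this log.
"$ROOT/build/examples/bdevperf" -r "$RPC_SOCK" -T raid_bdev1 -t 60 \
  -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!

# Wait until the app is up and its RPC socket answers.
until "$ROOT/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done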
00:22:17.928 [2024-04-27 05:04:47.743535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136377 ] 00:22:18.186 [2024-04-27 05:04:47.904380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.186 [2024-04-27 05:04:48.025015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.445 [2024-04-27 05:04:48.102236] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:19.011 05:04:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:19.011 05:04:48 -- common/autotest_common.sh@852 -- # return 0 00:22:19.011 05:04:48 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:19.011 05:04:48 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:19.011 05:04:48 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:19.270 BaseBdev1_malloc 00:22:19.270 05:04:48 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:19.270 [2024-04-27 05:04:49.176126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:19.270 [2024-04-27 05:04:49.176278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.270 [2024-04-27 05:04:49.176336] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:22:19.270 [2024-04-27 05:04:49.176403] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.529 [2024-04-27 05:04:49.179468] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.529 [2024-04-27 05:04:49.179529] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:19.529 BaseBdev1 00:22:19.529 05:04:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:19.529 05:04:49 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:19.529 05:04:49 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:19.529 BaseBdev2_malloc 00:22:19.788 05:04:49 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:19.788 [2024-04-27 05:04:49.647951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:19.788 [2024-04-27 05:04:49.648093] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.788 [2024-04-27 05:04:49.648154] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:19.788 [2024-04-27 05:04:49.648238] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.788 [2024-04-27 05:04:49.651135] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.788 [2024-04-27 05:04:49.651203] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:19.788 BaseBdev2 00:22:19.788 05:04:49 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:20.047 spare_malloc 00:22:20.047 05:04:49 
-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:20.306 spare_delay 00:22:20.306 05:04:50 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:20.564 [2024-04-27 05:04:50.418645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:20.564 [2024-04-27 05:04:50.418764] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.564 [2024-04-27 05:04:50.418826] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:22:20.564 [2024-04-27 05:04:50.418878] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.564 [2024-04-27 05:04:50.421850] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.564 [2024-04-27 05:04:50.421917] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:20.564 spare 00:22:20.564 05:04:50 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:20.821 [2024-04-27 05:04:50.643215] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:20.821 [2024-04-27 05:04:50.645834] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:20.821 [2024-04-27 05:04:50.646141] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:22:20.821 [2024-04-27 05:04:50.646171] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:20.821 [2024-04-27 05:04:50.646381] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:20.821 [2024-04-27 05:04:50.646892] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:22:20.821 [2024-04-27 05:04:50.646917] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:22:20.821 [2024-04-27 05:04:50.647175] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.821 05:04:50 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:20.821 05:04:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:20.821 05:04:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:20.821 05:04:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:20.821 05:04:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:20.821 05:04:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:20.821 05:04:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:20.821 05:04:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:20.821 05:04:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:20.821 05:04:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:20.821 05:04:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.821 05:04:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.080 05:04:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:21.080 "name": "raid_bdev1", 00:22:21.080 "uuid": "0ad562b2-a95a-413e-9e85-37874cd3fff5", 00:22:21.080 
"strip_size_kb": 0, 00:22:21.080 "state": "online", 00:22:21.080 "raid_level": "raid1", 00:22:21.080 "superblock": true, 00:22:21.080 "num_base_bdevs": 2, 00:22:21.080 "num_base_bdevs_discovered": 2, 00:22:21.080 "num_base_bdevs_operational": 2, 00:22:21.080 "base_bdevs_list": [ 00:22:21.080 { 00:22:21.080 "name": "BaseBdev1", 00:22:21.080 "uuid": "e2d43138-8526-5ebb-b2bf-8ecf806ecf79", 00:22:21.080 "is_configured": true, 00:22:21.080 "data_offset": 2048, 00:22:21.080 "data_size": 63488 00:22:21.080 }, 00:22:21.080 { 00:22:21.080 "name": "BaseBdev2", 00:22:21.080 "uuid": "426562e0-cad4-598e-8fa4-a203eea1614b", 00:22:21.080 "is_configured": true, 00:22:21.080 "data_offset": 2048, 00:22:21.080 "data_size": 63488 00:22:21.080 } 00:22:21.080 ] 00:22:21.080 }' 00:22:21.080 05:04:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:21.080 05:04:50 -- common/autotest_common.sh@10 -- # set +x 00:22:22.016 05:04:51 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:22.016 05:04:51 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:22.016 [2024-04-27 05:04:51.827784] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:22.016 05:04:51 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:22:22.016 05:04:51 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:22.016 05:04:51 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.275 05:04:52 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:22.275 05:04:52 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:22.275 05:04:52 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:22.275 05:04:52 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:22.534 [2024-04-27 05:04:52.199620] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:22:22.534 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:22.534 Zero copy mechanism will not be used. 00:22:22.534 Running I/O for 60 seconds... 
00:22:22.534 [2024-04-27 05:04:52.304499] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:22.534 [2024-04-27 05:04:52.312126] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:22:22.534 05:04:52 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:22.534 05:04:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:22.534 05:04:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:22.534 05:04:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:22.534 05:04:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:22.534 05:04:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:22.534 05:04:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:22.534 05:04:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:22.534 05:04:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:22.534 05:04:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:22.534 05:04:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.534 05:04:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.793 05:04:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:22.793 "name": "raid_bdev1", 00:22:22.793 "uuid": "0ad562b2-a95a-413e-9e85-37874cd3fff5", 00:22:22.793 "strip_size_kb": 0, 00:22:22.793 "state": "online", 00:22:22.793 "raid_level": "raid1", 00:22:22.793 "superblock": true, 00:22:22.793 "num_base_bdevs": 2, 00:22:22.793 "num_base_bdevs_discovered": 1, 00:22:22.793 "num_base_bdevs_operational": 1, 00:22:22.793 "base_bdevs_list": [ 00:22:22.793 { 00:22:22.793 "name": null, 00:22:22.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.793 "is_configured": false, 00:22:22.793 "data_offset": 2048, 00:22:22.793 "data_size": 63488 00:22:22.793 }, 00:22:22.793 { 00:22:22.793 "name": "BaseBdev2", 00:22:22.793 "uuid": "426562e0-cad4-598e-8fa4-a203eea1614b", 00:22:22.793 "is_configured": true, 00:22:22.793 "data_offset": 2048, 00:22:22.793 "data_size": 63488 00:22:22.793 } 00:22:22.793 ] 00:22:22.793 }' 00:22:22.793 05:04:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:22.793 05:04:52 -- common/autotest_common.sh@10 -- # set +x 00:22:23.360 05:04:53 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:23.943 [2024-04-27 05:04:53.533965] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:23.943 [2024-04-27 05:04:53.534056] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:23.943 05:04:53 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:23.943 [2024-04-27 05:04:53.582634] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:23.943 [2024-04-27 05:04:53.585202] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:23.944 [2024-04-27 05:04:53.712511] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:23.944 [2024-04-27 05:04:53.713285] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:24.219 [2024-04-27 05:04:53.941034] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:22:24.219 [2024-04-27 05:04:53.941556] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:24.477 [2024-04-27 05:04:54.323366] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:24.477 [2024-04-27 05:04:54.324338] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:24.736 [2024-04-27 05:04:54.527731] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:24.736 [2024-04-27 05:04:54.528197] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:24.736 05:04:54 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:24.736 05:04:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:24.736 05:04:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:24.736 05:04:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:24.736 05:04:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:24.736 05:04:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.736 05:04:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.993 [2024-04-27 05:04:54.805532] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:24.993 05:04:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:24.993 "name": "raid_bdev1", 00:22:24.993 "uuid": "0ad562b2-a95a-413e-9e85-37874cd3fff5", 00:22:24.993 "strip_size_kb": 0, 00:22:24.993 "state": "online", 00:22:24.993 "raid_level": "raid1", 00:22:24.993 "superblock": true, 00:22:24.993 "num_base_bdevs": 2, 00:22:24.993 "num_base_bdevs_discovered": 2, 00:22:24.993 "num_base_bdevs_operational": 2, 00:22:24.993 "process": { 00:22:24.993 "type": "rebuild", 00:22:24.993 "target": "spare", 00:22:24.993 "progress": { 00:22:24.993 "blocks": 14336, 00:22:24.993 "percent": 22 00:22:24.993 } 00:22:24.993 }, 00:22:24.993 "base_bdevs_list": [ 00:22:24.993 { 00:22:24.993 "name": "spare", 00:22:24.993 "uuid": "6343e7c9-9eec-5b92-a44d-49ab5694e6f5", 00:22:24.993 "is_configured": true, 00:22:24.993 "data_offset": 2048, 00:22:24.993 "data_size": 63488 00:22:24.993 }, 00:22:24.993 { 00:22:24.993 "name": "BaseBdev2", 00:22:24.993 "uuid": "426562e0-cad4-598e-8fa4-a203eea1614b", 00:22:24.993 "is_configured": true, 00:22:24.993 "data_offset": 2048, 00:22:24.993 "data_size": 63488 00:22:24.993 } 00:22:24.993 ] 00:22:24.993 }' 00:22:24.993 05:04:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:25.251 05:04:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:25.251 05:04:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:25.251 05:04:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:25.251 05:04:54 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:25.251 [2024-04-27 05:04:55.027006] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:25.251 [2024-04-27 05:04:55.027549] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 
offset_begin: 12288 offset_end: 18432 00:22:25.509 [2024-04-27 05:04:55.170884] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:25.509 [2024-04-27 05:04:55.293753] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:25.509 [2024-04-27 05:04:55.303921] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.509 [2024-04-27 05:04:55.322991] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:22:25.509 05:04:55 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:25.509 05:04:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:25.509 05:04:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:25.509 05:04:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:25.509 05:04:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:25.509 05:04:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:25.509 05:04:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:25.509 05:04:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:25.509 05:04:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:25.509 05:04:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:25.509 05:04:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.509 05:04:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.768 05:04:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:25.768 "name": "raid_bdev1", 00:22:25.768 "uuid": "0ad562b2-a95a-413e-9e85-37874cd3fff5", 00:22:25.768 "strip_size_kb": 0, 00:22:25.768 "state": "online", 00:22:25.768 "raid_level": "raid1", 00:22:25.768 "superblock": true, 00:22:25.768 "num_base_bdevs": 2, 00:22:25.768 "num_base_bdevs_discovered": 1, 00:22:25.768 "num_base_bdevs_operational": 1, 00:22:25.768 "base_bdevs_list": [ 00:22:25.768 { 00:22:25.768 "name": null, 00:22:25.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.768 "is_configured": false, 00:22:25.768 "data_offset": 2048, 00:22:25.768 "data_size": 63488 00:22:25.768 }, 00:22:25.768 { 00:22:25.768 "name": "BaseBdev2", 00:22:25.768 "uuid": "426562e0-cad4-598e-8fa4-a203eea1614b", 00:22:25.768 "is_configured": true, 00:22:25.768 "data_offset": 2048, 00:22:25.768 "data_size": 63488 00:22:25.768 } 00:22:25.768 ] 00:22:25.768 }' 00:22:25.768 05:04:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:25.768 05:04:55 -- common/autotest_common.sh@10 -- # set +x 00:22:26.702 05:04:56 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:26.702 05:04:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:26.702 05:04:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:26.702 05:04:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:26.702 05:04:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:26.702 05:04:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.702 05:04:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.961 05:04:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:26.961 "name": "raid_bdev1", 00:22:26.961 "uuid": "0ad562b2-a95a-413e-9e85-37874cd3fff5", 00:22:26.961 "strip_size_kb": 0, 00:22:26.961 "state": "online", 00:22:26.961 "raid_level": 
"raid1", 00:22:26.961 "superblock": true, 00:22:26.961 "num_base_bdevs": 2, 00:22:26.961 "num_base_bdevs_discovered": 1, 00:22:26.961 "num_base_bdevs_operational": 1, 00:22:26.961 "base_bdevs_list": [ 00:22:26.961 { 00:22:26.961 "name": null, 00:22:26.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.961 "is_configured": false, 00:22:26.961 "data_offset": 2048, 00:22:26.961 "data_size": 63488 00:22:26.961 }, 00:22:26.961 { 00:22:26.961 "name": "BaseBdev2", 00:22:26.961 "uuid": "426562e0-cad4-598e-8fa4-a203eea1614b", 00:22:26.961 "is_configured": true, 00:22:26.961 "data_offset": 2048, 00:22:26.961 "data_size": 63488 00:22:26.961 } 00:22:26.961 ] 00:22:26.961 }' 00:22:26.961 05:04:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:26.961 05:04:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:26.961 05:04:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:26.961 05:04:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:26.961 05:04:56 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:27.220 [2024-04-27 05:04:56.946754] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:27.220 [2024-04-27 05:04:56.946846] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:27.220 05:04:56 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:27.220 [2024-04-27 05:04:57.015745] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:27.220 [2024-04-27 05:04:57.018345] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:27.479 [2024-04-27 05:04:57.135869] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:27.479 [2024-04-27 05:04:57.137184] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:27.479 [2024-04-27 05:04:57.251874] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:27.738 [2024-04-27 05:04:57.507611] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:27.738 [2024-04-27 05:04:57.637333] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:28.305 05:04:57 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:28.305 05:04:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:28.305 05:04:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:28.305 05:04:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:28.305 05:04:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:28.305 05:04:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.305 05:04:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.305 [2024-04-27 05:04:58.095856] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:28.564 05:04:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:28.564 "name": "raid_bdev1", 00:22:28.564 "uuid": "0ad562b2-a95a-413e-9e85-37874cd3fff5", 00:22:28.564 "strip_size_kb": 0, 
00:22:28.564 "state": "online", 00:22:28.564 "raid_level": "raid1", 00:22:28.564 "superblock": true, 00:22:28.564 "num_base_bdevs": 2, 00:22:28.564 "num_base_bdevs_discovered": 2, 00:22:28.564 "num_base_bdevs_operational": 2, 00:22:28.564 "process": { 00:22:28.564 "type": "rebuild", 00:22:28.564 "target": "spare", 00:22:28.564 "progress": { 00:22:28.564 "blocks": 18432, 00:22:28.564 "percent": 29 00:22:28.564 } 00:22:28.564 }, 00:22:28.564 "base_bdevs_list": [ 00:22:28.564 { 00:22:28.564 "name": "spare", 00:22:28.564 "uuid": "6343e7c9-9eec-5b92-a44d-49ab5694e6f5", 00:22:28.564 "is_configured": true, 00:22:28.564 "data_offset": 2048, 00:22:28.564 "data_size": 63488 00:22:28.564 }, 00:22:28.564 { 00:22:28.564 "name": "BaseBdev2", 00:22:28.564 "uuid": "426562e0-cad4-598e-8fa4-a203eea1614b", 00:22:28.564 "is_configured": true, 00:22:28.564 "data_offset": 2048, 00:22:28.564 "data_size": 63488 00:22:28.564 } 00:22:28.564 ] 00:22:28.564 }' 00:22:28.564 05:04:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:28.564 05:04:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:28.564 05:04:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:28.564 [2024-04-27 05:04:58.343127] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:28.564 05:04:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:28.564 05:04:58 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:28.564 05:04:58 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:28.564 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:28.564 05:04:58 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:22:28.564 05:04:58 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:28.564 05:04:58 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:22:28.564 05:04:58 -- bdev/bdev_raid.sh@657 -- # local timeout=464 00:22:28.564 05:04:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:28.564 05:04:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:28.564 05:04:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:28.564 05:04:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:28.564 05:04:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:28.564 05:04:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:28.564 05:04:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.564 05:04:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.823 [2024-04-27 05:04:58.477420] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:28.823 [2024-04-27 05:04:58.477861] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:28.823 05:04:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:28.823 "name": "raid_bdev1", 00:22:28.823 "uuid": "0ad562b2-a95a-413e-9e85-37874cd3fff5", 00:22:28.823 "strip_size_kb": 0, 00:22:28.823 "state": "online", 00:22:28.823 "raid_level": "raid1", 00:22:28.823 "superblock": true, 00:22:28.823 "num_base_bdevs": 2, 00:22:28.823 "num_base_bdevs_discovered": 2, 00:22:28.823 "num_base_bdevs_operational": 2, 00:22:28.823 "process": { 00:22:28.823 "type": "rebuild", 00:22:28.823 "target": 
"spare", 00:22:28.823 "progress": { 00:22:28.823 "blocks": 22528, 00:22:28.823 "percent": 35 00:22:28.823 } 00:22:28.823 }, 00:22:28.823 "base_bdevs_list": [ 00:22:28.823 { 00:22:28.823 "name": "spare", 00:22:28.823 "uuid": "6343e7c9-9eec-5b92-a44d-49ab5694e6f5", 00:22:28.823 "is_configured": true, 00:22:28.823 "data_offset": 2048, 00:22:28.823 "data_size": 63488 00:22:28.823 }, 00:22:28.823 { 00:22:28.823 "name": "BaseBdev2", 00:22:28.823 "uuid": "426562e0-cad4-598e-8fa4-a203eea1614b", 00:22:28.823 "is_configured": true, 00:22:28.823 "data_offset": 2048, 00:22:28.823 "data_size": 63488 00:22:28.823 } 00:22:28.823 ] 00:22:28.823 }' 00:22:28.823 05:04:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:28.823 05:04:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:28.823 05:04:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:28.823 05:04:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:28.823 05:04:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:29.081 [2024-04-27 05:04:58.860604] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:22:29.647 [2024-04-27 05:04:59.316336] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:29.647 [2024-04-27 05:04:59.317171] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:29.647 [2024-04-27 05:04:59.545334] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:29.905 05:04:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:29.906 05:04:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:29.906 05:04:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:29.906 05:04:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:29.906 05:04:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:29.906 05:04:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:29.906 05:04:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.906 05:04:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.164 [2024-04-27 05:04:59.860182] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:30.164 05:05:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:30.164 "name": "raid_bdev1", 00:22:30.164 "uuid": "0ad562b2-a95a-413e-9e85-37874cd3fff5", 00:22:30.164 "strip_size_kb": 0, 00:22:30.164 "state": "online", 00:22:30.164 "raid_level": "raid1", 00:22:30.164 "superblock": true, 00:22:30.164 "num_base_bdevs": 2, 00:22:30.164 "num_base_bdevs_discovered": 2, 00:22:30.164 "num_base_bdevs_operational": 2, 00:22:30.164 "process": { 00:22:30.164 "type": "rebuild", 00:22:30.164 "target": "spare", 00:22:30.164 "progress": { 00:22:30.164 "blocks": 40960, 00:22:30.164 "percent": 64 00:22:30.164 } 00:22:30.164 }, 00:22:30.164 "base_bdevs_list": [ 00:22:30.164 { 00:22:30.164 "name": "spare", 00:22:30.164 "uuid": "6343e7c9-9eec-5b92-a44d-49ab5694e6f5", 00:22:30.164 "is_configured": true, 00:22:30.164 "data_offset": 2048, 00:22:30.164 "data_size": 63488 00:22:30.164 }, 00:22:30.164 { 00:22:30.164 "name": "BaseBdev2", 00:22:30.164 "uuid": 
"426562e0-cad4-598e-8fa4-a203eea1614b", 00:22:30.164 "is_configured": true, 00:22:30.164 "data_offset": 2048, 00:22:30.164 "data_size": 63488 00:22:30.164 } 00:22:30.164 ] 00:22:30.164 }' 00:22:30.164 05:05:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:30.164 05:05:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:30.164 05:05:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:30.422 05:05:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:30.422 05:05:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:30.422 [2024-04-27 05:05:00.326438] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:31.384 [2024-04-27 05:05:01.100303] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:22:31.384 05:05:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:31.384 05:05:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:31.384 05:05:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:31.384 05:05:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:31.384 05:05:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:31.384 05:05:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:31.384 05:05:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.384 05:05:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.642 [2024-04-27 05:05:01.349923] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:31.642 05:05:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:31.642 "name": "raid_bdev1", 00:22:31.642 "uuid": "0ad562b2-a95a-413e-9e85-37874cd3fff5", 00:22:31.642 "strip_size_kb": 0, 00:22:31.642 "state": "online", 00:22:31.642 "raid_level": "raid1", 00:22:31.642 "superblock": true, 00:22:31.642 "num_base_bdevs": 2, 00:22:31.642 "num_base_bdevs_discovered": 2, 00:22:31.642 "num_base_bdevs_operational": 2, 00:22:31.642 "process": { 00:22:31.642 "type": "rebuild", 00:22:31.642 "target": "spare", 00:22:31.642 "progress": { 00:22:31.642 "blocks": 63488, 00:22:31.642 "percent": 100 00:22:31.642 } 00:22:31.642 }, 00:22:31.642 "base_bdevs_list": [ 00:22:31.642 { 00:22:31.642 "name": "spare", 00:22:31.642 "uuid": "6343e7c9-9eec-5b92-a44d-49ab5694e6f5", 00:22:31.642 "is_configured": true, 00:22:31.642 "data_offset": 2048, 00:22:31.642 "data_size": 63488 00:22:31.642 }, 00:22:31.642 { 00:22:31.642 "name": "BaseBdev2", 00:22:31.642 "uuid": "426562e0-cad4-598e-8fa4-a203eea1614b", 00:22:31.642 "is_configured": true, 00:22:31.642 "data_offset": 2048, 00:22:31.642 "data_size": 63488 00:22:31.642 } 00:22:31.642 ] 00:22:31.642 }' 00:22:31.642 05:05:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:31.642 05:05:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:31.642 05:05:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:31.642 [2024-04-27 05:05:01.449888] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:31.642 [2024-04-27 05:05:01.453197] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:31.642 05:05:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:31.642 05:05:01 -- bdev/bdev_raid.sh@662 -- # sleep 
1 00:22:33.021 05:05:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:33.021 05:05:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:33.021 05:05:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:33.021 05:05:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:33.021 05:05:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:33.021 05:05:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:33.021 05:05:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.021 05:05:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.021 05:05:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:33.021 "name": "raid_bdev1", 00:22:33.021 "uuid": "0ad562b2-a95a-413e-9e85-37874cd3fff5", 00:22:33.021 "strip_size_kb": 0, 00:22:33.021 "state": "online", 00:22:33.021 "raid_level": "raid1", 00:22:33.021 "superblock": true, 00:22:33.021 "num_base_bdevs": 2, 00:22:33.021 "num_base_bdevs_discovered": 2, 00:22:33.021 "num_base_bdevs_operational": 2, 00:22:33.021 "base_bdevs_list": [ 00:22:33.021 { 00:22:33.021 "name": "spare", 00:22:33.021 "uuid": "6343e7c9-9eec-5b92-a44d-49ab5694e6f5", 00:22:33.021 "is_configured": true, 00:22:33.021 "data_offset": 2048, 00:22:33.021 "data_size": 63488 00:22:33.021 }, 00:22:33.021 { 00:22:33.021 "name": "BaseBdev2", 00:22:33.022 "uuid": "426562e0-cad4-598e-8fa4-a203eea1614b", 00:22:33.022 "is_configured": true, 00:22:33.022 "data_offset": 2048, 00:22:33.022 "data_size": 63488 00:22:33.022 } 00:22:33.022 ] 00:22:33.022 }' 00:22:33.022 05:05:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:33.022 05:05:02 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:33.022 05:05:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:33.022 05:05:02 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:33.022 05:05:02 -- bdev/bdev_raid.sh@660 -- # break 00:22:33.022 05:05:02 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:33.022 05:05:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:33.022 05:05:02 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:33.022 05:05:02 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:33.022 05:05:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:33.022 05:05:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.022 05:05:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.279 05:05:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:33.279 "name": "raid_bdev1", 00:22:33.279 "uuid": "0ad562b2-a95a-413e-9e85-37874cd3fff5", 00:22:33.279 "strip_size_kb": 0, 00:22:33.279 "state": "online", 00:22:33.279 "raid_level": "raid1", 00:22:33.279 "superblock": true, 00:22:33.279 "num_base_bdevs": 2, 00:22:33.279 "num_base_bdevs_discovered": 2, 00:22:33.279 "num_base_bdevs_operational": 2, 00:22:33.279 "base_bdevs_list": [ 00:22:33.279 { 00:22:33.279 "name": "spare", 00:22:33.279 "uuid": "6343e7c9-9eec-5b92-a44d-49ab5694e6f5", 00:22:33.279 "is_configured": true, 00:22:33.279 "data_offset": 2048, 00:22:33.279 "data_size": 63488 00:22:33.279 }, 00:22:33.279 { 00:22:33.279 "name": "BaseBdev2", 00:22:33.279 "uuid": "426562e0-cad4-598e-8fa4-a203eea1614b", 00:22:33.279 "is_configured": true, 00:22:33.279 
"data_offset": 2048, 00:22:33.279 "data_size": 63488 00:22:33.279 } 00:22:33.279 ] 00:22:33.279 }' 00:22:33.279 05:05:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:33.536 05:05:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:33.536 05:05:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:33.536 05:05:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:33.536 05:05:03 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:33.536 05:05:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:33.536 05:05:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:33.536 05:05:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:33.536 05:05:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:33.536 05:05:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:33.536 05:05:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:33.536 05:05:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:33.536 05:05:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:33.536 05:05:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:33.536 05:05:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.536 05:05:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.794 05:05:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:33.794 "name": "raid_bdev1", 00:22:33.794 "uuid": "0ad562b2-a95a-413e-9e85-37874cd3fff5", 00:22:33.794 "strip_size_kb": 0, 00:22:33.794 "state": "online", 00:22:33.794 "raid_level": "raid1", 00:22:33.794 "superblock": true, 00:22:33.794 "num_base_bdevs": 2, 00:22:33.794 "num_base_bdevs_discovered": 2, 00:22:33.794 "num_base_bdevs_operational": 2, 00:22:33.794 "base_bdevs_list": [ 00:22:33.794 { 00:22:33.794 "name": "spare", 00:22:33.794 "uuid": "6343e7c9-9eec-5b92-a44d-49ab5694e6f5", 00:22:33.794 "is_configured": true, 00:22:33.794 "data_offset": 2048, 00:22:33.794 "data_size": 63488 00:22:33.794 }, 00:22:33.794 { 00:22:33.794 "name": "BaseBdev2", 00:22:33.794 "uuid": "426562e0-cad4-598e-8fa4-a203eea1614b", 00:22:33.794 "is_configured": true, 00:22:33.794 "data_offset": 2048, 00:22:33.794 "data_size": 63488 00:22:33.794 } 00:22:33.794 ] 00:22:33.794 }' 00:22:33.794 05:05:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:33.794 05:05:03 -- common/autotest_common.sh@10 -- # set +x 00:22:34.360 05:05:04 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:34.618 [2024-04-27 05:05:04.480413] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:34.618 [2024-04-27 05:05:04.480789] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:34.875 00:22:34.875 Latency(us) 00:22:34.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.875 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:34.875 raid_bdev1 : 12.36 94.11 282.34 0.00 0.00 14734.36 336.99 118203.11 00:22:34.875 =================================================================================================================== 00:22:34.875 Total : 94.11 282.34 0.00 0.00 14734.36 336.99 118203.11 00:22:34.875 [2024-04-27 05:05:04.566354] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:22:34.875 [2024-04-27 05:05:04.566599] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:34.875 0 00:22:34.875 [2024-04-27 05:05:04.566769] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:34.875 [2024-04-27 05:05:04.566789] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:22:34.875 05:05:04 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.875 05:05:04 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:35.134 05:05:04 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:35.134 05:05:04 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:35.134 05:05:04 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:35.134 05:05:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:35.134 05:05:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:35.134 05:05:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:35.134 05:05:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:35.134 05:05:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:35.134 05:05:04 -- bdev/nbd_common.sh@12 -- # local i 00:22:35.134 05:05:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:35.134 05:05:04 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:35.134 05:05:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:35.393 /dev/nbd0 00:22:35.393 05:05:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:35.393 05:05:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:35.393 05:05:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:35.393 05:05:05 -- common/autotest_common.sh@857 -- # local i 00:22:35.393 05:05:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:35.393 05:05:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:35.393 05:05:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:35.393 05:05:05 -- common/autotest_common.sh@861 -- # break 00:22:35.393 05:05:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:35.393 05:05:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:35.393 05:05:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:35.393 1+0 records in 00:22:35.393 1+0 records out 00:22:35.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000918997 s, 4.5 MB/s 00:22:35.393 05:05:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:35.393 05:05:05 -- common/autotest_common.sh@874 -- # size=4096 00:22:35.393 05:05:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:35.393 05:05:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:35.393 05:05:05 -- common/autotest_common.sh@877 -- # return 0 00:22:35.393 05:05:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:35.393 05:05:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:35.393 05:05:05 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:35.393 05:05:05 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:22:35.393 05:05:05 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:22:35.393 05:05:05 -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:22:35.393 05:05:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:22:35.393 05:05:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:35.393 05:05:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:35.393 05:05:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:35.393 05:05:05 -- bdev/nbd_common.sh@12 -- # local i 00:22:35.393 05:05:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:35.393 05:05:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:35.393 05:05:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:22:35.652 /dev/nbd1 00:22:35.652 05:05:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:35.652 05:05:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:35.652 05:05:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:35.652 05:05:05 -- common/autotest_common.sh@857 -- # local i 00:22:35.652 05:05:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:35.652 05:05:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:35.652 05:05:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:35.652 05:05:05 -- common/autotest_common.sh@861 -- # break 00:22:35.652 05:05:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:35.652 05:05:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:35.652 05:05:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:35.652 1+0 records in 00:22:35.652 1+0 records out 00:22:35.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000614913 s, 6.7 MB/s 00:22:35.652 05:05:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:35.652 05:05:05 -- common/autotest_common.sh@874 -- # size=4096 00:22:35.652 05:05:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:35.652 05:05:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:35.652 05:05:05 -- common/autotest_common.sh@877 -- # return 0 00:22:35.652 05:05:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:35.652 05:05:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:35.652 05:05:05 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:35.911 05:05:05 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:35.911 05:05:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:35.911 05:05:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:35.911 05:05:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:35.911 05:05:05 -- bdev/nbd_common.sh@51 -- # local i 00:22:35.911 05:05:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:35.911 05:05:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:36.171 05:05:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:36.171 05:05:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:36.171 05:05:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:36.171 05:05:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:36.171 05:05:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:36.171 05:05:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:36.171 05:05:05 -- bdev/nbd_common.sh@41 -- # break 00:22:36.171 05:05:05 -- bdev/nbd_common.sh@45 -- # return 0 
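The NBD-based integrity check above condenses to the sketch below. Because this is the superblock variant, each member carries a raid superblock before its data, so the comparison skips data_offset = 2048 blocks x 512 B = 1048576 bytes on both devices (the non-superblock run earlier in the log used cmp -i 0):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Export the rebuilt spare and the surviving member as NBD block devices.
$RPC nbd_start_disk spare /dev/nbd0
$RPC nbd_start_disk BaseBdev2 /dev/nbd1

# Compare payloads, skipping the superblock region on both devices.
cmp -i 1048576 /dev/nbd0 /dev/nbd1

# Detach the NBD devices again.
$RPC nbd_stop_disk /dev/nbd1
$RPC nbd_stop_disk /dev/nbd0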
00:22:36.171 05:05:05 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:36.171 05:05:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:36.171 05:05:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:36.171 05:05:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:36.171 05:05:05 -- bdev/nbd_common.sh@51 -- # local i 00:22:36.171 05:05:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:36.171 05:05:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:36.429 05:05:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:36.429 05:05:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:36.429 05:05:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:36.429 05:05:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:36.429 05:05:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:36.429 05:05:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:36.429 05:05:06 -- bdev/nbd_common.sh@41 -- # break 00:22:36.429 05:05:06 -- bdev/nbd_common.sh@45 -- # return 0 00:22:36.429 05:05:06 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:36.429 05:05:06 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:36.429 05:05:06 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:36.429 05:05:06 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:36.690 05:05:06 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:36.949 [2024-04-27 05:05:06.639873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:36.949 [2024-04-27 05:05:06.640318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.949 [2024-04-27 05:05:06.640499] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:36.949 [2024-04-27 05:05:06.640664] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.949 [2024-04-27 05:05:06.643579] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.949 [2024-04-27 05:05:06.643783] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:36.949 [2024-04-27 05:05:06.644055] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:36.949 [2024-04-27 05:05:06.644237] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:36.949 BaseBdev1 00:22:36.949 05:05:06 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:36.949 05:05:06 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:22:36.949 05:05:06 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:22:37.208 05:05:06 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:37.466 [2024-04-27 05:05:07.184369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:37.466 [2024-04-27 05:05:07.184824] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.466 [2024-04-27 05:05:07.184918] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:37.466 [2024-04-27 05:05:07.185177] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.466 [2024-04-27 05:05:07.185777] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.466 [2024-04-27 05:05:07.185993] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:37.466 [2024-04-27 05:05:07.186237] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:22:37.466 [2024-04-27 05:05:07.186358] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:22:37.466 [2024-04-27 05:05:07.186465] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:37.466 [2024-04-27 05:05:07.186562] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:22:37.466 [2024-04-27 05:05:07.186848] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:37.466 BaseBdev2 00:22:37.466 05:05:07 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:37.725 05:05:07 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:37.997 [2024-04-27 05:05:07.664616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:37.997 [2024-04-27 05:05:07.665540] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.997 [2024-04-27 05:05:07.665643] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:37.997 [2024-04-27 05:05:07.665844] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.997 [2024-04-27 05:05:07.666538] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.997 [2024-04-27 05:05:07.666739] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:37.997 [2024-04-27 05:05:07.667001] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:37.997 [2024-04-27 05:05:07.667183] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:37.997 spare 00:22:37.997 05:05:07 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:37.997 05:05:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:37.997 05:05:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:37.997 05:05:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:37.997 05:05:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:37.997 05:05:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:37.997 05:05:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:37.997 05:05:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:37.997 05:05:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:37.997 05:05:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:37.997 05:05:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.997 05:05:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.997 
[2024-04-27 05:05:07.767489] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:22:37.997 [2024-04-27 05:05:07.767804] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:37.997 [2024-04-27 05:05:07.768098] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:22:37.997 [2024-04-27 05:05:07.768839] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:22:37.997 [2024-04-27 05:05:07.768967] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:22:37.997 [2024-04-27 05:05:07.769273] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.268 05:05:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:38.268 "name": "raid_bdev1", 00:22:38.268 "uuid": "0ad562b2-a95a-413e-9e85-37874cd3fff5", 00:22:38.268 "strip_size_kb": 0, 00:22:38.268 "state": "online", 00:22:38.268 "raid_level": "raid1", 00:22:38.268 "superblock": true, 00:22:38.268 "num_base_bdevs": 2, 00:22:38.268 "num_base_bdevs_discovered": 2, 00:22:38.268 "num_base_bdevs_operational": 2, 00:22:38.268 "base_bdevs_list": [ 00:22:38.268 { 00:22:38.268 "name": "spare", 00:22:38.268 "uuid": "6343e7c9-9eec-5b92-a44d-49ab5694e6f5", 00:22:38.268 "is_configured": true, 00:22:38.268 "data_offset": 2048, 00:22:38.268 "data_size": 63488 00:22:38.268 }, 00:22:38.268 { 00:22:38.268 "name": "BaseBdev2", 00:22:38.268 "uuid": "426562e0-cad4-598e-8fa4-a203eea1614b", 00:22:38.268 "is_configured": true, 00:22:38.268 "data_offset": 2048, 00:22:38.268 "data_size": 63488 00:22:38.268 } 00:22:38.268 ] 00:22:38.268 }' 00:22:38.268 05:05:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:38.268 05:05:07 -- common/autotest_common.sh@10 -- # set +x 00:22:38.836 05:05:08 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:38.836 05:05:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:38.836 05:05:08 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:38.836 05:05:08 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:38.836 05:05:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:38.836 05:05:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.836 05:05:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.095 05:05:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:39.095 "name": "raid_bdev1", 00:22:39.095 "uuid": "0ad562b2-a95a-413e-9e85-37874cd3fff5", 00:22:39.095 "strip_size_kb": 0, 00:22:39.095 "state": "online", 00:22:39.095 "raid_level": "raid1", 00:22:39.095 "superblock": true, 00:22:39.095 "num_base_bdevs": 2, 00:22:39.095 "num_base_bdevs_discovered": 2, 00:22:39.095 "num_base_bdevs_operational": 2, 00:22:39.095 "base_bdevs_list": [ 00:22:39.095 { 00:22:39.095 "name": "spare", 00:22:39.095 "uuid": "6343e7c9-9eec-5b92-a44d-49ab5694e6f5", 00:22:39.095 "is_configured": true, 00:22:39.095 "data_offset": 2048, 00:22:39.095 "data_size": 63488 00:22:39.095 }, 00:22:39.095 { 00:22:39.095 "name": "BaseBdev2", 00:22:39.095 "uuid": "426562e0-cad4-598e-8fa4-a203eea1614b", 00:22:39.095 "is_configured": true, 00:22:39.095 "data_offset": 2048, 00:22:39.095 "data_size": 63488 00:22:39.095 } 00:22:39.095 ] 00:22:39.095 }' 00:22:39.095 05:05:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:39.095 05:05:08 -- bdev/bdev_raid.sh@190 
-- # [[ none == \n\o\n\e ]] 00:22:39.095 05:05:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:39.095 05:05:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:39.095 05:05:08 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.095 05:05:08 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:39.353 05:05:09 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:39.353 05:05:09 -- bdev/bdev_raid.sh@709 -- # killprocess 136377 00:22:39.353 05:05:09 -- common/autotest_common.sh@926 -- # '[' -z 136377 ']' 00:22:39.353 05:05:09 -- common/autotest_common.sh@930 -- # kill -0 136377 00:22:39.353 05:05:09 -- common/autotest_common.sh@931 -- # uname 00:22:39.353 05:05:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:39.353 05:05:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136377 00:22:39.353 05:05:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:39.353 05:05:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:39.353 05:05:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136377' 00:22:39.353 killing process with pid 136377 00:22:39.353 05:05:09 -- common/autotest_common.sh@945 -- # kill 136377 00:22:39.354 Received shutdown signal, test time was about 17.037032 seconds 00:22:39.354 00:22:39.354 Latency(us) 00:22:39.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.354 =================================================================================================================== 00:22:39.354 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:39.354 05:05:09 -- common/autotest_common.sh@950 -- # wait 136377 00:22:39.354 [2024-04-27 05:05:09.240055] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:39.354 [2024-04-27 05:05:09.240192] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:39.354 [2024-04-27 05:05:09.240289] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:39.354 [2024-04-27 05:05:09.240305] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:22:39.612 [2024-04-27 05:05:09.288889] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:39.871 00:22:39.871 real 0m21.975s 00:22:39.871 user 0m35.712s 00:22:39.871 sys 0m2.469s 00:22:39.871 05:05:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:39.871 05:05:09 -- common/autotest_common.sh@10 -- # set +x 00:22:39.871 ************************************ 00:22:39.871 END TEST raid_rebuild_test_sb_io 00:22:39.871 ************************************ 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:22:39.871 05:05:09 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:39.871 05:05:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:39.871 05:05:09 -- common/autotest_common.sh@10 -- # set +x 00:22:39.871 ************************************ 00:22:39.871 START TEST raid_rebuild_test 00:22:39.871 ************************************ 00:22:39.871 05:05:09 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false false 00:22:39.871 05:05:09 -- 
bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@544 -- # raid_pid=136957 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@545 -- # waitforlisten 136957 /var/tmp/spdk-raid.sock 00:22:39.871 05:05:09 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:39.871 05:05:09 -- common/autotest_common.sh@819 -- # '[' -z 136957 ']' 00:22:39.871 05:05:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:39.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:39.871 05:05:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:39.871 05:05:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:39.871 05:05:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:39.871 05:05:09 -- common/autotest_common.sh@10 -- # set +x 00:22:40.130 [2024-04-27 05:05:09.786781] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:40.130 [2024-04-27 05:05:09.787706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136957 ] 00:22:40.130 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:22:40.130 Zero copy mechanism will not be used. 00:22:40.130 [2024-04-27 05:05:09.947432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.388 [2024-04-27 05:05:10.069781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.388 [2024-04-27 05:05:10.147819] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:40.955 05:05:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:40.955 05:05:10 -- common/autotest_common.sh@852 -- # return 0 00:22:40.955 05:05:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:40.955 05:05:10 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:40.955 05:05:10 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:41.213 BaseBdev1 00:22:41.213 05:05:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:41.213 05:05:11 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:41.213 05:05:11 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:41.472 BaseBdev2 00:22:41.472 05:05:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:41.472 05:05:11 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:41.472 05:05:11 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:41.731 BaseBdev3 00:22:41.989 05:05:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:41.989 05:05:11 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:41.989 05:05:11 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:41.989 BaseBdev4 00:22:41.989 05:05:11 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:42.248 spare_malloc 00:22:42.248 05:05:12 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:42.507 spare_delay 00:22:42.507 05:05:12 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:42.765 [2024-04-27 05:05:12.641675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:42.765 [2024-04-27 05:05:12.642150] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.765 [2024-04-27 05:05:12.642375] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:22:42.765 [2024-04-27 05:05:12.642585] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.765 [2024-04-27 05:05:12.645825] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.765 [2024-04-27 05:05:12.646046] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:42.765 spare 00:22:42.765 05:05:12 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:43.024 [2024-04-27 05:05:12.882643] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:22:43.024 [2024-04-27 05:05:12.885431] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:43.024 [2024-04-27 05:05:12.885637] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:43.024 [2024-04-27 05:05:12.885728] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:43.024 [2024-04-27 05:05:12.885958] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:22:43.024 [2024-04-27 05:05:12.886088] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:43.024 [2024-04-27 05:05:12.886364] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:43.024 [2024-04-27 05:05:12.886958] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:22:43.024 [2024-04-27 05:05:12.887090] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:22:43.024 [2024-04-27 05:05:12.887477] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.024 05:05:12 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:43.024 05:05:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:43.024 05:05:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:43.024 05:05:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:43.024 05:05:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:43.024 05:05:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:43.024 05:05:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:43.024 05:05:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:43.024 05:05:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:43.024 05:05:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:43.024 05:05:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.024 05:05:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.283 05:05:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:43.283 "name": "raid_bdev1", 00:22:43.283 "uuid": "94f1f68c-c7ff-4d3b-9d38-87cc72da4e67", 00:22:43.283 "strip_size_kb": 0, 00:22:43.283 "state": "online", 00:22:43.283 "raid_level": "raid1", 00:22:43.283 "superblock": false, 00:22:43.283 "num_base_bdevs": 4, 00:22:43.283 "num_base_bdevs_discovered": 4, 00:22:43.283 "num_base_bdevs_operational": 4, 00:22:43.283 "base_bdevs_list": [ 00:22:43.283 { 00:22:43.283 "name": "BaseBdev1", 00:22:43.283 "uuid": "b3457418-4103-4b8b-aea5-e0842126d27d", 00:22:43.283 "is_configured": true, 00:22:43.283 "data_offset": 0, 00:22:43.283 "data_size": 65536 00:22:43.283 }, 00:22:43.283 { 00:22:43.283 "name": "BaseBdev2", 00:22:43.283 "uuid": "03790a10-7618-4846-8a91-6d22a6d64cc7", 00:22:43.283 "is_configured": true, 00:22:43.283 "data_offset": 0, 00:22:43.283 "data_size": 65536 00:22:43.283 }, 00:22:43.283 { 00:22:43.283 "name": "BaseBdev3", 00:22:43.283 "uuid": "3b689376-f506-489b-9d32-aa592e8e46f5", 00:22:43.283 "is_configured": true, 00:22:43.283 "data_offset": 0, 00:22:43.283 "data_size": 65536 00:22:43.283 }, 00:22:43.283 { 00:22:43.283 "name": "BaseBdev4", 00:22:43.283 "uuid": "c373d4d8-7e25-407a-b0e3-2af3b99f0c8d", 00:22:43.283 "is_configured": true, 00:22:43.283 "data_offset": 0, 00:22:43.283 "data_size": 65536 00:22:43.283 } 
00:22:43.283 ] 00:22:43.283 }' 00:22:43.283 05:05:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:43.283 05:05:13 -- common/autotest_common.sh@10 -- # set +x 00:22:44.216 05:05:13 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:44.216 05:05:13 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:44.216 [2024-04-27 05:05:14.064082] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:44.216 05:05:14 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:44.216 05:05:14 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.216 05:05:14 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:44.475 05:05:14 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:44.475 05:05:14 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:44.475 05:05:14 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:44.475 05:05:14 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:44.475 05:05:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:44.475 05:05:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:44.475 05:05:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:44.475 05:05:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:44.475 05:05:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:44.475 05:05:14 -- bdev/nbd_common.sh@12 -- # local i 00:22:44.475 05:05:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:44.475 05:05:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:44.475 05:05:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:44.733 [2024-04-27 05:05:14.547940] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:44.733 /dev/nbd0 00:22:44.733 05:05:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:44.733 05:05:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:44.733 05:05:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:44.733 05:05:14 -- common/autotest_common.sh@857 -- # local i 00:22:44.733 05:05:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:44.733 05:05:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:44.733 05:05:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:44.733 05:05:14 -- common/autotest_common.sh@861 -- # break 00:22:44.733 05:05:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:44.733 05:05:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:44.733 05:05:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:44.733 1+0 records in 00:22:44.733 1+0 records out 00:22:44.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00159614 s, 2.6 MB/s 00:22:44.733 05:05:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:44.733 05:05:14 -- common/autotest_common.sh@874 -- # size=4096 00:22:44.733 05:05:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:44.733 05:05:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:44.733 05:05:14 -- common/autotest_common.sh@877 -- # return 0 00:22:44.733 05:05:14 -- bdev/nbd_common.sh@14 -- # (( 
i++ )) 00:22:44.733 05:05:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:44.733 05:05:14 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:22:44.733 05:05:14 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:22:44.734 05:05:14 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:22:51.293 65536+0 records in 00:22:51.293 65536+0 records out 00:22:51.293 33554432 bytes (34 MB, 32 MiB) copied, 6.20782 s, 5.4 MB/s 00:22:51.293 05:05:20 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:51.293 05:05:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:51.293 05:05:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:51.293 05:05:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:51.293 05:05:20 -- bdev/nbd_common.sh@51 -- # local i 00:22:51.293 05:05:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:51.293 05:05:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:51.293 05:05:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:51.294 [2024-04-27 05:05:21.109549] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:51.294 05:05:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:51.294 05:05:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:51.294 05:05:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:51.294 05:05:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:51.294 05:05:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:51.294 05:05:21 -- bdev/nbd_common.sh@41 -- # break 00:22:51.294 05:05:21 -- bdev/nbd_common.sh@45 -- # return 0 00:22:51.294 05:05:21 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:51.552 [2024-04-27 05:05:21.325251] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:51.552 05:05:21 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:51.552 05:05:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:51.552 05:05:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:51.552 05:05:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:51.552 05:05:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:51.552 05:05:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:51.552 05:05:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:51.552 05:05:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:51.552 05:05:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:51.552 05:05:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:51.552 05:05:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.552 05:05:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.810 05:05:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:51.810 "name": "raid_bdev1", 00:22:51.810 "uuid": "94f1f68c-c7ff-4d3b-9d38-87cc72da4e67", 00:22:51.810 "strip_size_kb": 0, 00:22:51.810 "state": "online", 00:22:51.810 "raid_level": "raid1", 00:22:51.810 "superblock": false, 00:22:51.810 "num_base_bdevs": 4, 00:22:51.810 "num_base_bdevs_discovered": 3, 00:22:51.810 "num_base_bdevs_operational": 3, 00:22:51.810 "base_bdevs_list": [ 00:22:51.810 { 00:22:51.810 "name": null, 
00:22:51.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.810 "is_configured": false, 00:22:51.810 "data_offset": 0, 00:22:51.810 "data_size": 65536 00:22:51.810 }, 00:22:51.810 { 00:22:51.810 "name": "BaseBdev2", 00:22:51.810 "uuid": "03790a10-7618-4846-8a91-6d22a6d64cc7", 00:22:51.810 "is_configured": true, 00:22:51.810 "data_offset": 0, 00:22:51.810 "data_size": 65536 00:22:51.810 }, 00:22:51.810 { 00:22:51.810 "name": "BaseBdev3", 00:22:51.810 "uuid": "3b689376-f506-489b-9d32-aa592e8e46f5", 00:22:51.810 "is_configured": true, 00:22:51.810 "data_offset": 0, 00:22:51.810 "data_size": 65536 00:22:51.810 }, 00:22:51.810 { 00:22:51.810 "name": "BaseBdev4", 00:22:51.810 "uuid": "c373d4d8-7e25-407a-b0e3-2af3b99f0c8d", 00:22:51.810 "is_configured": true, 00:22:51.810 "data_offset": 0, 00:22:51.810 "data_size": 65536 00:22:51.810 } 00:22:51.810 ] 00:22:51.810 }' 00:22:51.810 05:05:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:51.810 05:05:21 -- common/autotest_common.sh@10 -- # set +x 00:22:52.376 05:05:22 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:52.633 [2024-04-27 05:05:22.481598] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:52.633 [2024-04-27 05:05:22.481966] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:52.633 [2024-04-27 05:05:22.487913] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0 00:22:52.633 [2024-04-27 05:05:22.490700] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:52.633 05:05:22 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:54.006 05:05:23 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:54.006 05:05:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:54.006 05:05:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:54.006 05:05:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:54.006 05:05:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:54.006 05:05:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.006 05:05:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.006 05:05:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:54.006 "name": "raid_bdev1", 00:22:54.006 "uuid": "94f1f68c-c7ff-4d3b-9d38-87cc72da4e67", 00:22:54.006 "strip_size_kb": 0, 00:22:54.006 "state": "online", 00:22:54.006 "raid_level": "raid1", 00:22:54.006 "superblock": false, 00:22:54.006 "num_base_bdevs": 4, 00:22:54.006 "num_base_bdevs_discovered": 4, 00:22:54.006 "num_base_bdevs_operational": 4, 00:22:54.006 "process": { 00:22:54.006 "type": "rebuild", 00:22:54.006 "target": "spare", 00:22:54.006 "progress": { 00:22:54.006 "blocks": 24576, 00:22:54.006 "percent": 37 00:22:54.006 } 00:22:54.006 }, 00:22:54.006 "base_bdevs_list": [ 00:22:54.006 { 00:22:54.006 "name": "spare", 00:22:54.006 "uuid": "07db1e71-bca3-5978-b888-fad7a67a724c", 00:22:54.006 "is_configured": true, 00:22:54.006 "data_offset": 0, 00:22:54.006 "data_size": 65536 00:22:54.006 }, 00:22:54.006 { 00:22:54.006 "name": "BaseBdev2", 00:22:54.006 "uuid": "03790a10-7618-4846-8a91-6d22a6d64cc7", 00:22:54.006 "is_configured": true, 00:22:54.006 "data_offset": 0, 00:22:54.006 "data_size": 65536 00:22:54.006 }, 00:22:54.006 { 00:22:54.006 "name": 
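At this point the raid_rebuild_test run above has degraded the array and re-attached a spare purely through RPCs, which is what triggers the "Started rebuild on raid bdev raid_bdev1" notice. A condensed sketch of that sequence, assuming the same rpc.py path and socket as this run; the spare is built from a 32 MiB / 512-byte malloc bdev wrapped in a delay bdev and a passthru bdev, exactly as the log shows.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Build the spare: malloc -> delay -> passthru, as in the test setup above.
$rpc -s "$sock" bdev_malloc_create 32 512 -b spare_malloc
$rpc -s "$sock" bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$rpc -s "$sock" bdev_passthru_create -b spare_delay -p spare

# Degrade the array, then attach the spare; SPDK starts the rebuild itself.
$rpc -s "$sock" bdev_raid_remove_base_bdev BaseBdev1
$rpc -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare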
"BaseBdev3", 00:22:54.006 "uuid": "3b689376-f506-489b-9d32-aa592e8e46f5", 00:22:54.006 "is_configured": true, 00:22:54.006 "data_offset": 0, 00:22:54.006 "data_size": 65536 00:22:54.006 }, 00:22:54.006 { 00:22:54.006 "name": "BaseBdev4", 00:22:54.006 "uuid": "c373d4d8-7e25-407a-b0e3-2af3b99f0c8d", 00:22:54.006 "is_configured": true, 00:22:54.006 "data_offset": 0, 00:22:54.006 "data_size": 65536 00:22:54.006 } 00:22:54.006 ] 00:22:54.006 }' 00:22:54.006 05:05:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:54.006 05:05:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:54.006 05:05:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:54.006 05:05:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:54.006 05:05:23 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:54.283 [2024-04-27 05:05:24.096425] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:54.283 [2024-04-27 05:05:24.105090] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:54.283 [2024-04-27 05:05:24.105405] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:54.283 05:05:24 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:54.283 05:05:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:54.283 05:05:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:54.283 05:05:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:54.283 05:05:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:54.283 05:05:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:54.283 05:05:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:54.283 05:05:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:54.283 05:05:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:54.283 05:05:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:54.283 05:05:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.283 05:05:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.561 05:05:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:54.561 "name": "raid_bdev1", 00:22:54.561 "uuid": "94f1f68c-c7ff-4d3b-9d38-87cc72da4e67", 00:22:54.561 "strip_size_kb": 0, 00:22:54.561 "state": "online", 00:22:54.561 "raid_level": "raid1", 00:22:54.561 "superblock": false, 00:22:54.561 "num_base_bdevs": 4, 00:22:54.561 "num_base_bdevs_discovered": 3, 00:22:54.561 "num_base_bdevs_operational": 3, 00:22:54.561 "base_bdevs_list": [ 00:22:54.561 { 00:22:54.561 "name": null, 00:22:54.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.561 "is_configured": false, 00:22:54.561 "data_offset": 0, 00:22:54.561 "data_size": 65536 00:22:54.561 }, 00:22:54.561 { 00:22:54.561 "name": "BaseBdev2", 00:22:54.561 "uuid": "03790a10-7618-4846-8a91-6d22a6d64cc7", 00:22:54.561 "is_configured": true, 00:22:54.561 "data_offset": 0, 00:22:54.561 "data_size": 65536 00:22:54.561 }, 00:22:54.561 { 00:22:54.561 "name": "BaseBdev3", 00:22:54.561 "uuid": "3b689376-f506-489b-9d32-aa592e8e46f5", 00:22:54.561 "is_configured": true, 00:22:54.561 "data_offset": 0, 00:22:54.561 "data_size": 65536 00:22:54.561 }, 00:22:54.561 { 00:22:54.561 "name": "BaseBdev4", 00:22:54.561 "uuid": 
"c373d4d8-7e25-407a-b0e3-2af3b99f0c8d", 00:22:54.561 "is_configured": true, 00:22:54.561 "data_offset": 0, 00:22:54.561 "data_size": 65536 00:22:54.561 } 00:22:54.561 ] 00:22:54.561 }' 00:22:54.561 05:05:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:54.561 05:05:24 -- common/autotest_common.sh@10 -- # set +x 00:22:55.128 05:05:25 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:55.128 05:05:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:55.128 05:05:25 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:55.128 05:05:25 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:55.128 05:05:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:55.128 05:05:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.128 05:05:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.387 05:05:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:55.387 "name": "raid_bdev1", 00:22:55.387 "uuid": "94f1f68c-c7ff-4d3b-9d38-87cc72da4e67", 00:22:55.387 "strip_size_kb": 0, 00:22:55.387 "state": "online", 00:22:55.387 "raid_level": "raid1", 00:22:55.387 "superblock": false, 00:22:55.387 "num_base_bdevs": 4, 00:22:55.387 "num_base_bdevs_discovered": 3, 00:22:55.387 "num_base_bdevs_operational": 3, 00:22:55.387 "base_bdevs_list": [ 00:22:55.387 { 00:22:55.387 "name": null, 00:22:55.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.387 "is_configured": false, 00:22:55.387 "data_offset": 0, 00:22:55.387 "data_size": 65536 00:22:55.387 }, 00:22:55.387 { 00:22:55.387 "name": "BaseBdev2", 00:22:55.387 "uuid": "03790a10-7618-4846-8a91-6d22a6d64cc7", 00:22:55.387 "is_configured": true, 00:22:55.387 "data_offset": 0, 00:22:55.387 "data_size": 65536 00:22:55.387 }, 00:22:55.387 { 00:22:55.387 "name": "BaseBdev3", 00:22:55.387 "uuid": "3b689376-f506-489b-9d32-aa592e8e46f5", 00:22:55.387 "is_configured": true, 00:22:55.387 "data_offset": 0, 00:22:55.387 "data_size": 65536 00:22:55.387 }, 00:22:55.387 { 00:22:55.387 "name": "BaseBdev4", 00:22:55.387 "uuid": "c373d4d8-7e25-407a-b0e3-2af3b99f0c8d", 00:22:55.387 "is_configured": true, 00:22:55.387 "data_offset": 0, 00:22:55.387 "data_size": 65536 00:22:55.387 } 00:22:55.387 ] 00:22:55.387 }' 00:22:55.387 05:05:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:55.645 05:05:25 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:55.645 05:05:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:55.645 05:05:25 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:55.645 05:05:25 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:55.904 [2024-04-27 05:05:25.597336] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:55.904 [2024-04-27 05:05:25.597606] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:55.904 [2024-04-27 05:05:25.603384] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09890 00:22:55.904 [2024-04-27 05:05:25.606038] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:55.904 05:05:25 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:56.841 05:05:26 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:56.841 05:05:26 -- bdev/bdev_raid.sh@183 -- # 
local raid_bdev_name=raid_bdev1 00:22:56.841 05:05:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:56.841 05:05:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:56.841 05:05:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:56.841 05:05:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.841 05:05:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.100 05:05:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:57.100 "name": "raid_bdev1", 00:22:57.100 "uuid": "94f1f68c-c7ff-4d3b-9d38-87cc72da4e67", 00:22:57.100 "strip_size_kb": 0, 00:22:57.100 "state": "online", 00:22:57.100 "raid_level": "raid1", 00:22:57.100 "superblock": false, 00:22:57.100 "num_base_bdevs": 4, 00:22:57.100 "num_base_bdevs_discovered": 4, 00:22:57.100 "num_base_bdevs_operational": 4, 00:22:57.100 "process": { 00:22:57.100 "type": "rebuild", 00:22:57.100 "target": "spare", 00:22:57.100 "progress": { 00:22:57.100 "blocks": 24576, 00:22:57.100 "percent": 37 00:22:57.100 } 00:22:57.100 }, 00:22:57.100 "base_bdevs_list": [ 00:22:57.100 { 00:22:57.100 "name": "spare", 00:22:57.100 "uuid": "07db1e71-bca3-5978-b888-fad7a67a724c", 00:22:57.100 "is_configured": true, 00:22:57.100 "data_offset": 0, 00:22:57.100 "data_size": 65536 00:22:57.100 }, 00:22:57.100 { 00:22:57.100 "name": "BaseBdev2", 00:22:57.100 "uuid": "03790a10-7618-4846-8a91-6d22a6d64cc7", 00:22:57.100 "is_configured": true, 00:22:57.100 "data_offset": 0, 00:22:57.100 "data_size": 65536 00:22:57.100 }, 00:22:57.100 { 00:22:57.100 "name": "BaseBdev3", 00:22:57.100 "uuid": "3b689376-f506-489b-9d32-aa592e8e46f5", 00:22:57.100 "is_configured": true, 00:22:57.100 "data_offset": 0, 00:22:57.100 "data_size": 65536 00:22:57.100 }, 00:22:57.100 { 00:22:57.100 "name": "BaseBdev4", 00:22:57.100 "uuid": "c373d4d8-7e25-407a-b0e3-2af3b99f0c8d", 00:22:57.100 "is_configured": true, 00:22:57.100 "data_offset": 0, 00:22:57.100 "data_size": 65536 00:22:57.100 } 00:22:57.100 ] 00:22:57.100 }' 00:22:57.100 05:05:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:57.100 05:05:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:57.100 05:05:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:57.100 05:05:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.100 05:05:26 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:57.100 05:05:26 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:57.100 05:05:26 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:57.100 05:05:26 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:57.100 05:05:26 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:57.359 [2024-04-27 05:05:27.255825] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:57.618 [2024-04-27 05:05:27.319175] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09890 00:22:57.618 05:05:27 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:57.618 05:05:27 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:57.618 05:05:27 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:57.618 05:05:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:57.618 05:05:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 
00:22:57.618 05:05:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:57.618 05:05:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:57.618 05:05:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.618 05:05:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.877 05:05:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:57.877 "name": "raid_bdev1", 00:22:57.877 "uuid": "94f1f68c-c7ff-4d3b-9d38-87cc72da4e67", 00:22:57.877 "strip_size_kb": 0, 00:22:57.877 "state": "online", 00:22:57.877 "raid_level": "raid1", 00:22:57.877 "superblock": false, 00:22:57.877 "num_base_bdevs": 4, 00:22:57.877 "num_base_bdevs_discovered": 3, 00:22:57.877 "num_base_bdevs_operational": 3, 00:22:57.877 "process": { 00:22:57.877 "type": "rebuild", 00:22:57.877 "target": "spare", 00:22:57.877 "progress": { 00:22:57.877 "blocks": 38912, 00:22:57.877 "percent": 59 00:22:57.877 } 00:22:57.877 }, 00:22:57.877 "base_bdevs_list": [ 00:22:57.877 { 00:22:57.877 "name": "spare", 00:22:57.877 "uuid": "07db1e71-bca3-5978-b888-fad7a67a724c", 00:22:57.877 "is_configured": true, 00:22:57.877 "data_offset": 0, 00:22:57.877 "data_size": 65536 00:22:57.877 }, 00:22:57.877 { 00:22:57.877 "name": null, 00:22:57.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.877 "is_configured": false, 00:22:57.877 "data_offset": 0, 00:22:57.877 "data_size": 65536 00:22:57.877 }, 00:22:57.877 { 00:22:57.877 "name": "BaseBdev3", 00:22:57.877 "uuid": "3b689376-f506-489b-9d32-aa592e8e46f5", 00:22:57.877 "is_configured": true, 00:22:57.877 "data_offset": 0, 00:22:57.877 "data_size": 65536 00:22:57.877 }, 00:22:57.877 { 00:22:57.877 "name": "BaseBdev4", 00:22:57.877 "uuid": "c373d4d8-7e25-407a-b0e3-2af3b99f0c8d", 00:22:57.877 "is_configured": true, 00:22:57.877 "data_offset": 0, 00:22:57.877 "data_size": 65536 00:22:57.877 } 00:22:57.877 ] 00:22:57.877 }' 00:22:57.877 05:05:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:57.877 05:05:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:57.877 05:05:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:57.877 05:05:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.877 05:05:27 -- bdev/bdev_raid.sh@657 -- # local timeout=493 00:22:57.877 05:05:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:57.877 05:05:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:57.877 05:05:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:57.877 05:05:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:57.877 05:05:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:57.877 05:05:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:57.877 05:05:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.878 05:05:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.137 05:05:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:58.137 "name": "raid_bdev1", 00:22:58.137 "uuid": "94f1f68c-c7ff-4d3b-9d38-87cc72da4e67", 00:22:58.137 "strip_size_kb": 0, 00:22:58.137 "state": "online", 00:22:58.137 "raid_level": "raid1", 00:22:58.137 "superblock": false, 00:22:58.137 "num_base_bdevs": 4, 00:22:58.137 "num_base_bdevs_discovered": 3, 00:22:58.137 "num_base_bdevs_operational": 3, 00:22:58.137 "process": { 
00:22:58.137 "type": "rebuild", 00:22:58.137 "target": "spare", 00:22:58.137 "progress": { 00:22:58.137 "blocks": 45056, 00:22:58.137 "percent": 68 00:22:58.137 } 00:22:58.137 }, 00:22:58.137 "base_bdevs_list": [ 00:22:58.137 { 00:22:58.137 "name": "spare", 00:22:58.137 "uuid": "07db1e71-bca3-5978-b888-fad7a67a724c", 00:22:58.137 "is_configured": true, 00:22:58.137 "data_offset": 0, 00:22:58.137 "data_size": 65536 00:22:58.137 }, 00:22:58.137 { 00:22:58.137 "name": null, 00:22:58.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.137 "is_configured": false, 00:22:58.137 "data_offset": 0, 00:22:58.137 "data_size": 65536 00:22:58.137 }, 00:22:58.137 { 00:22:58.137 "name": "BaseBdev3", 00:22:58.137 "uuid": "3b689376-f506-489b-9d32-aa592e8e46f5", 00:22:58.137 "is_configured": true, 00:22:58.137 "data_offset": 0, 00:22:58.137 "data_size": 65536 00:22:58.137 }, 00:22:58.137 { 00:22:58.137 "name": "BaseBdev4", 00:22:58.137 "uuid": "c373d4d8-7e25-407a-b0e3-2af3b99f0c8d", 00:22:58.137 "is_configured": true, 00:22:58.137 "data_offset": 0, 00:22:58.137 "data_size": 65536 00:22:58.137 } 00:22:58.137 ] 00:22:58.137 }' 00:22:58.137 05:05:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:58.137 05:05:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:58.137 05:05:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:58.137 05:05:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:58.137 05:05:28 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:59.073 [2024-04-27 05:05:28.831878] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:59.073 [2024-04-27 05:05:28.832371] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:59.073 [2024-04-27 05:05:28.832616] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.332 05:05:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:59.332 05:05:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:59.332 05:05:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:59.332 05:05:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:59.332 05:05:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:59.332 05:05:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:59.332 05:05:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.332 05:05:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.590 05:05:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:59.590 "name": "raid_bdev1", 00:22:59.590 "uuid": "94f1f68c-c7ff-4d3b-9d38-87cc72da4e67", 00:22:59.590 "strip_size_kb": 0, 00:22:59.590 "state": "online", 00:22:59.590 "raid_level": "raid1", 00:22:59.590 "superblock": false, 00:22:59.590 "num_base_bdevs": 4, 00:22:59.590 "num_base_bdevs_discovered": 3, 00:22:59.590 "num_base_bdevs_operational": 3, 00:22:59.590 "base_bdevs_list": [ 00:22:59.590 { 00:22:59.590 "name": "spare", 00:22:59.590 "uuid": "07db1e71-bca3-5978-b888-fad7a67a724c", 00:22:59.590 "is_configured": true, 00:22:59.590 "data_offset": 0, 00:22:59.590 "data_size": 65536 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "name": null, 00:22:59.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.590 "is_configured": false, 00:22:59.590 "data_offset": 0, 00:22:59.590 "data_size": 65536 00:22:59.590 }, 00:22:59.590 { 
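The verify_raid_bdev_process checks above repeatedly dump the raid bdev and read the rebuild's process fields with jq. Below is a small illustrative polling loop in the same style; the jq filters mirror the ones used by the test, .process.progress.percent follows the JSON shape printed above, and the loop itself is not part of the test script.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

while true; do
  info=$($rpc -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  ptype=$(jq -r '.process.type // "none"' <<< "$info")
  pct=$(jq -r '.process.progress.percent // 0' <<< "$info")
  echo "process=$ptype progress=${pct}%"
  [[ $ptype == none ]] && break   # "none" once the rebuild has finished
  sleep 1
done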
00:22:59.590 "name": "BaseBdev3", 00:22:59.590 "uuid": "3b689376-f506-489b-9d32-aa592e8e46f5", 00:22:59.590 "is_configured": true, 00:22:59.590 "data_offset": 0, 00:22:59.590 "data_size": 65536 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "name": "BaseBdev4", 00:22:59.590 "uuid": "c373d4d8-7e25-407a-b0e3-2af3b99f0c8d", 00:22:59.590 "is_configured": true, 00:22:59.590 "data_offset": 0, 00:22:59.591 "data_size": 65536 00:22:59.591 } 00:22:59.591 ] 00:22:59.591 }' 00:22:59.591 05:05:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:59.591 05:05:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:59.591 05:05:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:59.591 05:05:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:59.591 05:05:29 -- bdev/bdev_raid.sh@660 -- # break 00:22:59.591 05:05:29 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:59.591 05:05:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:59.591 05:05:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:59.591 05:05:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:59.591 05:05:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:59.591 05:05:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.591 05:05:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.850 05:05:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:59.850 "name": "raid_bdev1", 00:22:59.850 "uuid": "94f1f68c-c7ff-4d3b-9d38-87cc72da4e67", 00:22:59.850 "strip_size_kb": 0, 00:22:59.850 "state": "online", 00:22:59.850 "raid_level": "raid1", 00:22:59.850 "superblock": false, 00:22:59.850 "num_base_bdevs": 4, 00:22:59.850 "num_base_bdevs_discovered": 3, 00:22:59.850 "num_base_bdevs_operational": 3, 00:22:59.850 "base_bdevs_list": [ 00:22:59.850 { 00:22:59.850 "name": "spare", 00:22:59.850 "uuid": "07db1e71-bca3-5978-b888-fad7a67a724c", 00:22:59.850 "is_configured": true, 00:22:59.850 "data_offset": 0, 00:22:59.850 "data_size": 65536 00:22:59.850 }, 00:22:59.850 { 00:22:59.850 "name": null, 00:22:59.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.850 "is_configured": false, 00:22:59.850 "data_offset": 0, 00:22:59.850 "data_size": 65536 00:22:59.850 }, 00:22:59.850 { 00:22:59.850 "name": "BaseBdev3", 00:22:59.850 "uuid": "3b689376-f506-489b-9d32-aa592e8e46f5", 00:22:59.850 "is_configured": true, 00:22:59.850 "data_offset": 0, 00:22:59.850 "data_size": 65536 00:22:59.850 }, 00:22:59.850 { 00:22:59.850 "name": "BaseBdev4", 00:22:59.850 "uuid": "c373d4d8-7e25-407a-b0e3-2af3b99f0c8d", 00:22:59.850 "is_configured": true, 00:22:59.850 "data_offset": 0, 00:22:59.850 "data_size": 65536 00:22:59.850 } 00:22:59.850 ] 00:22:59.850 }' 00:22:59.850 05:05:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:59.850 05:05:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:59.850 05:05:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:00.108 05:05:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:00.108 05:05:29 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:00.108 05:05:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:00.108 05:05:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:00.108 05:05:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:00.108 
05:05:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:00.108 05:05:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:00.108 05:05:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:00.108 05:05:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:00.108 05:05:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:00.108 05:05:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:00.108 05:05:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.108 05:05:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.367 05:05:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:00.367 "name": "raid_bdev1", 00:23:00.367 "uuid": "94f1f68c-c7ff-4d3b-9d38-87cc72da4e67", 00:23:00.367 "strip_size_kb": 0, 00:23:00.367 "state": "online", 00:23:00.367 "raid_level": "raid1", 00:23:00.367 "superblock": false, 00:23:00.367 "num_base_bdevs": 4, 00:23:00.368 "num_base_bdevs_discovered": 3, 00:23:00.368 "num_base_bdevs_operational": 3, 00:23:00.368 "base_bdevs_list": [ 00:23:00.368 { 00:23:00.368 "name": "spare", 00:23:00.368 "uuid": "07db1e71-bca3-5978-b888-fad7a67a724c", 00:23:00.368 "is_configured": true, 00:23:00.368 "data_offset": 0, 00:23:00.368 "data_size": 65536 00:23:00.368 }, 00:23:00.368 { 00:23:00.368 "name": null, 00:23:00.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.368 "is_configured": false, 00:23:00.368 "data_offset": 0, 00:23:00.368 "data_size": 65536 00:23:00.368 }, 00:23:00.368 { 00:23:00.368 "name": "BaseBdev3", 00:23:00.368 "uuid": "3b689376-f506-489b-9d32-aa592e8e46f5", 00:23:00.368 "is_configured": true, 00:23:00.368 "data_offset": 0, 00:23:00.368 "data_size": 65536 00:23:00.368 }, 00:23:00.368 { 00:23:00.368 "name": "BaseBdev4", 00:23:00.368 "uuid": "c373d4d8-7e25-407a-b0e3-2af3b99f0c8d", 00:23:00.368 "is_configured": true, 00:23:00.368 "data_offset": 0, 00:23:00.368 "data_size": 65536 00:23:00.368 } 00:23:00.368 ] 00:23:00.368 }' 00:23:00.368 05:05:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:00.368 05:05:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.934 05:05:30 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:01.220 [2024-04-27 05:05:30.924295] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:01.220 [2024-04-27 05:05:30.924513] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:01.220 [2024-04-27 05:05:30.924812] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:01.220 [2024-04-27 05:05:30.925049] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:01.220 [2024-04-27 05:05:30.925165] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:23:01.220 05:05:30 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.220 05:05:30 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:01.481 05:05:31 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:01.481 05:05:31 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:01.481 05:05:31 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:01.481 05:05:31 -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:23:01.481 05:05:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:01.481 05:05:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:01.481 05:05:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:01.481 05:05:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:01.481 05:05:31 -- bdev/nbd_common.sh@12 -- # local i 00:23:01.481 05:05:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:01.481 05:05:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:01.481 05:05:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:01.740 /dev/nbd0 00:23:01.740 05:05:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:01.740 05:05:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:01.740 05:05:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:01.740 05:05:31 -- common/autotest_common.sh@857 -- # local i 00:23:01.740 05:05:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:01.740 05:05:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:01.740 05:05:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:01.740 05:05:31 -- common/autotest_common.sh@861 -- # break 00:23:01.740 05:05:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:01.740 05:05:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:01.740 05:05:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:01.740 1+0 records in 00:23:01.740 1+0 records out 00:23:01.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000830774 s, 4.9 MB/s 00:23:01.740 05:05:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:01.740 05:05:31 -- common/autotest_common.sh@874 -- # size=4096 00:23:01.740 05:05:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:01.740 05:05:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:01.740 05:05:31 -- common/autotest_common.sh@877 -- # return 0 00:23:01.740 05:05:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:01.740 05:05:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:01.740 05:05:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:01.998 /dev/nbd1 00:23:01.998 05:05:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:01.998 05:05:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:01.998 05:05:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:01.998 05:05:31 -- common/autotest_common.sh@857 -- # local i 00:23:01.998 05:05:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:01.998 05:05:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:01.998 05:05:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:01.998 05:05:31 -- common/autotest_common.sh@861 -- # break 00:23:01.998 05:05:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:01.998 05:05:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:01.998 05:05:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:01.999 1+0 records in 00:23:01.999 1+0 records out 00:23:01.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000644201 s, 6.4 MB/s 00:23:01.999 05:05:31 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:01.999 05:05:31 -- common/autotest_common.sh@874 -- # size=4096 00:23:01.999 05:05:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:02.258 05:05:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:02.258 05:05:31 -- common/autotest_common.sh@877 -- # return 0 00:23:02.258 05:05:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:02.258 05:05:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:02.258 05:05:31 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:02.258 05:05:31 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:02.258 05:05:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:02.258 05:05:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:02.258 05:05:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:02.258 05:05:32 -- bdev/nbd_common.sh@51 -- # local i 00:23:02.258 05:05:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:02.258 05:05:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:02.516 05:05:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:02.516 05:05:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:02.516 05:05:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:02.516 05:05:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:02.516 05:05:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:02.516 05:05:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:02.516 05:05:32 -- bdev/nbd_common.sh@41 -- # break 00:23:02.516 05:05:32 -- bdev/nbd_common.sh@45 -- # return 0 00:23:02.516 05:05:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:02.516 05:05:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:02.775 05:05:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:02.775 05:05:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:02.775 05:05:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:02.775 05:05:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:02.775 05:05:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:02.775 05:05:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:02.775 05:05:32 -- bdev/nbd_common.sh@41 -- # break 00:23:02.775 05:05:32 -- bdev/nbd_common.sh@45 -- # return 0 00:23:02.775 05:05:32 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:02.775 05:05:32 -- bdev/bdev_raid.sh@709 -- # killprocess 136957 00:23:02.775 05:05:32 -- common/autotest_common.sh@926 -- # '[' -z 136957 ']' 00:23:02.775 05:05:32 -- common/autotest_common.sh@930 -- # kill -0 136957 00:23:02.775 05:05:32 -- common/autotest_common.sh@931 -- # uname 00:23:02.775 05:05:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:02.775 05:05:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136957 00:23:02.775 05:05:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:02.775 05:05:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:02.775 05:05:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136957' 00:23:02.775 killing process with pid 136957 00:23:02.775 05:05:32 -- common/autotest_common.sh@945 -- # kill 136957 00:23:02.775 Received shutdown 
signal, test time was about 60.000000 seconds 00:23:02.775 00:23:02.775 Latency(us) 00:23:02.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.775 =================================================================================================================== 00:23:02.775 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:02.775 05:05:32 -- common/autotest_common.sh@950 -- # wait 136957 00:23:02.775 [2024-04-27 05:05:32.642086] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:03.033 [2024-04-27 05:05:32.755857] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:03.292 ************************************ 00:23:03.292 END TEST raid_rebuild_test 00:23:03.292 ************************************ 00:23:03.292 05:05:33 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:03.292 00:23:03.292 real 0m23.456s 00:23:03.292 user 0m33.133s 00:23:03.292 sys 0m4.257s 00:23:03.292 05:05:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:03.292 05:05:33 -- common/autotest_common.sh@10 -- # set +x 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:23:03.550 05:05:33 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:03.550 05:05:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:03.550 05:05:33 -- common/autotest_common.sh@10 -- # set +x 00:23:03.550 ************************************ 00:23:03.550 START TEST raid_rebuild_test_sb 00:23:03.550 ************************************ 00:23:03.550 05:05:33 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true false 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:03.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@544 -- # raid_pid=137516 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@545 -- # waitforlisten 137516 /var/tmp/spdk-raid.sock 00:23:03.550 05:05:33 -- common/autotest_common.sh@819 -- # '[' -z 137516 ']' 00:23:03.550 05:05:33 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:03.550 05:05:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:03.550 05:05:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:03.550 05:05:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:03.550 05:05:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:03.550 05:05:33 -- common/autotest_common.sh@10 -- # set +x 00:23:03.550 [2024-04-27 05:05:33.313742] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:03.550 [2024-04-27 05:05:33.314297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137516 ] 00:23:03.550 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:03.550 Zero copy mechanism will not be used. 
00:23:03.808 [2024-04-27 05:05:33.488435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.808 [2024-04-27 05:05:33.626480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.066 [2024-04-27 05:05:33.720471] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:04.632 05:05:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:04.632 05:05:34 -- common/autotest_common.sh@852 -- # return 0 00:23:04.632 05:05:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:04.632 05:05:34 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:04.632 05:05:34 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:04.891 BaseBdev1_malloc 00:23:04.891 05:05:34 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:05.149 [2024-04-27 05:05:34.840078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:05.149 [2024-04-27 05:05:34.840525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:05.149 [2024-04-27 05:05:34.840769] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:23:05.149 [2024-04-27 05:05:34.840949] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:05.149 [2024-04-27 05:05:34.844132] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:05.149 [2024-04-27 05:05:34.844316] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:05.149 BaseBdev1 00:23:05.149 05:05:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:05.149 05:05:34 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:05.149 05:05:34 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:05.407 BaseBdev2_malloc 00:23:05.408 05:05:35 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:05.666 [2024-04-27 05:05:35.342501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:05.666 [2024-04-27 05:05:35.342932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:05.666 [2024-04-27 05:05:35.343050] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:05.666 [2024-04-27 05:05:35.343345] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:05.666 [2024-04-27 05:05:35.346304] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:05.666 [2024-04-27 05:05:35.346490] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:05.666 BaseBdev2 00:23:05.666 05:05:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:05.666 05:05:35 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:05.666 05:05:35 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:05.925 BaseBdev3_malloc 00:23:05.925 05:05:35 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:23:06.185 [2024-04-27 05:05:35.900988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:06.185 [2024-04-27 05:05:35.901385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:06.185 [2024-04-27 05:05:35.901487] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:23:06.185 [2024-04-27 05:05:35.901762] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:06.185 [2024-04-27 05:05:35.904638] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:06.185 [2024-04-27 05:05:35.904817] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:06.185 BaseBdev3 00:23:06.185 05:05:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:06.185 05:05:35 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:06.185 05:05:35 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:06.443 BaseBdev4_malloc 00:23:06.443 05:05:36 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:06.701 [2024-04-27 05:05:36.397755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:06.701 [2024-04-27 05:05:36.398183] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:06.701 [2024-04-27 05:05:36.398278] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:23:06.701 [2024-04-27 05:05:36.398559] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:06.701 [2024-04-27 05:05:36.401475] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:06.701 [2024-04-27 05:05:36.401659] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:06.701 BaseBdev4 00:23:06.701 05:05:36 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:06.959 spare_malloc 00:23:06.959 05:05:36 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:07.217 spare_delay 00:23:07.217 05:05:36 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:07.475 [2024-04-27 05:05:37.162564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:07.475 [2024-04-27 05:05:37.162997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.475 [2024-04-27 05:05:37.163104] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:07.475 [2024-04-27 05:05:37.163284] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.475 [2024-04-27 05:05:37.166382] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.475 [2024-04-27 05:05:37.166573] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:07.475 spare 00:23:07.475 05:05:37 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:07.733 [2024-04-27 05:05:37.403147] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:07.733 [2024-04-27 05:05:37.405939] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:07.733 [2024-04-27 05:05:37.406193] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:07.733 [2024-04-27 05:05:37.406392] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:07.733 [2024-04-27 05:05:37.406801] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:23:07.733 [2024-04-27 05:05:37.406931] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:07.733 [2024-04-27 05:05:37.407191] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:07.733 [2024-04-27 05:05:37.407789] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:23:07.733 [2024-04-27 05:05:37.407913] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:23:07.733 [2024-04-27 05:05:37.408295] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:07.733 05:05:37 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:07.733 05:05:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:07.733 05:05:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:07.733 05:05:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:07.733 05:05:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:07.733 05:05:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:07.733 05:05:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:07.733 05:05:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:07.733 05:05:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:07.733 05:05:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:07.733 05:05:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.733 05:05:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.990 05:05:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:07.990 "name": "raid_bdev1", 00:23:07.990 "uuid": "ffd92e38-9a17-4c99-a468-36101de99301", 00:23:07.990 "strip_size_kb": 0, 00:23:07.990 "state": "online", 00:23:07.990 "raid_level": "raid1", 00:23:07.991 "superblock": true, 00:23:07.991 "num_base_bdevs": 4, 00:23:07.991 "num_base_bdevs_discovered": 4, 00:23:07.991 "num_base_bdevs_operational": 4, 00:23:07.991 "base_bdevs_list": [ 00:23:07.991 { 00:23:07.991 "name": "BaseBdev1", 00:23:07.991 "uuid": "a22bd46f-e076-5276-9a65-e5dd178eb55d", 00:23:07.991 "is_configured": true, 00:23:07.991 "data_offset": 2048, 00:23:07.991 "data_size": 63488 00:23:07.991 }, 00:23:07.991 { 00:23:07.991 "name": "BaseBdev2", 00:23:07.991 "uuid": "cbf473d5-f5e2-5864-8cfc-d9fd13e7d384", 00:23:07.991 "is_configured": true, 00:23:07.991 "data_offset": 2048, 00:23:07.991 "data_size": 63488 00:23:07.991 }, 00:23:07.991 { 00:23:07.991 "name": "BaseBdev3", 00:23:07.991 "uuid": "cf0ed9b5-d1f8-5332-8085-badfa0b12c38", 00:23:07.991 "is_configured": true, 00:23:07.991 "data_offset": 2048, 00:23:07.991 "data_size": 63488 00:23:07.991 }, 00:23:07.991 
{ 00:23:07.991 "name": "BaseBdev4", 00:23:07.991 "uuid": "d30a2737-5f7e-55df-9d14-0909d976e2c6", 00:23:07.991 "is_configured": true, 00:23:07.991 "data_offset": 2048, 00:23:07.991 "data_size": 63488 00:23:07.991 } 00:23:07.991 ] 00:23:07.991 }' 00:23:07.991 05:05:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:07.991 05:05:37 -- common/autotest_common.sh@10 -- # set +x 00:23:08.557 05:05:38 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:08.557 05:05:38 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:08.832 [2024-04-27 05:05:38.488885] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:08.832 05:05:38 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:23:08.832 05:05:38 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:08.832 05:05:38 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.090 05:05:38 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:09.090 05:05:38 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:09.090 05:05:38 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:09.090 05:05:38 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:09.091 05:05:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:09.091 05:05:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:09.091 05:05:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:09.091 05:05:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:09.091 05:05:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:09.091 05:05:38 -- bdev/nbd_common.sh@12 -- # local i 00:23:09.091 05:05:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:09.091 05:05:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:09.091 05:05:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:09.349 [2024-04-27 05:05:39.008817] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:09.349 /dev/nbd0 00:23:09.349 05:05:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:09.349 05:05:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:09.349 05:05:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:09.349 05:05:39 -- common/autotest_common.sh@857 -- # local i 00:23:09.349 05:05:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:09.349 05:05:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:09.349 05:05:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:09.349 05:05:39 -- common/autotest_common.sh@861 -- # break 00:23:09.349 05:05:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:09.349 05:05:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:09.349 05:05:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:09.349 1+0 records in 00:23:09.349 1+0 records out 00:23:09.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530318 s, 7.7 MB/s 00:23:09.349 05:05:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.349 05:05:39 -- common/autotest_common.sh@874 -- # size=4096 00:23:09.349 05:05:39 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.349 05:05:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:09.349 05:05:39 -- common/autotest_common.sh@877 -- # return 0 00:23:09.349 05:05:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:09.349 05:05:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:09.349 05:05:39 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:23:09.349 05:05:39 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:23:09.349 05:05:39 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:23:15.903 63488+0 records in 00:23:15.903 63488+0 records out 00:23:15.903 32505856 bytes (33 MB, 31 MiB) copied, 6.45992 s, 5.0 MB/s 00:23:15.903 05:05:45 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:15.903 05:05:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:15.903 05:05:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:15.903 05:05:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:15.903 05:05:45 -- bdev/nbd_common.sh@51 -- # local i 00:23:15.903 05:05:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:15.903 05:05:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:16.161 05:05:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:16.161 [2024-04-27 05:05:45.817023] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:16.161 05:05:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:16.161 05:05:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:16.161 05:05:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:16.161 05:05:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:16.161 05:05:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:16.161 05:05:45 -- bdev/nbd_common.sh@41 -- # break 00:23:16.161 05:05:45 -- bdev/nbd_common.sh@45 -- # return 0 00:23:16.161 05:05:45 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:16.161 [2024-04-27 05:05:46.068425] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:16.421 05:05:46 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:16.421 05:05:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:16.421 05:05:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:16.421 05:05:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:16.421 05:05:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:16.421 05:05:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:16.421 05:05:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:16.421 05:05:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:16.421 05:05:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:16.421 05:05:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:16.421 05:05:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.421 05:05:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.682 05:05:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:16.682 "name": "raid_bdev1", 00:23:16.682 "uuid": "ffd92e38-9a17-4c99-a468-36101de99301", 00:23:16.682 "strip_size_kb": 0, 00:23:16.682 "state": "online", 00:23:16.682 
"raid_level": "raid1", 00:23:16.682 "superblock": true, 00:23:16.682 "num_base_bdevs": 4, 00:23:16.682 "num_base_bdevs_discovered": 3, 00:23:16.682 "num_base_bdevs_operational": 3, 00:23:16.682 "base_bdevs_list": [ 00:23:16.682 { 00:23:16.682 "name": null, 00:23:16.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.682 "is_configured": false, 00:23:16.682 "data_offset": 2048, 00:23:16.682 "data_size": 63488 00:23:16.682 }, 00:23:16.682 { 00:23:16.682 "name": "BaseBdev2", 00:23:16.682 "uuid": "cbf473d5-f5e2-5864-8cfc-d9fd13e7d384", 00:23:16.682 "is_configured": true, 00:23:16.682 "data_offset": 2048, 00:23:16.682 "data_size": 63488 00:23:16.682 }, 00:23:16.682 { 00:23:16.682 "name": "BaseBdev3", 00:23:16.682 "uuid": "cf0ed9b5-d1f8-5332-8085-badfa0b12c38", 00:23:16.682 "is_configured": true, 00:23:16.682 "data_offset": 2048, 00:23:16.682 "data_size": 63488 00:23:16.682 }, 00:23:16.682 { 00:23:16.682 "name": "BaseBdev4", 00:23:16.682 "uuid": "d30a2737-5f7e-55df-9d14-0909d976e2c6", 00:23:16.682 "is_configured": true, 00:23:16.682 "data_offset": 2048, 00:23:16.682 "data_size": 63488 00:23:16.682 } 00:23:16.682 ] 00:23:16.682 }' 00:23:16.682 05:05:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:16.682 05:05:46 -- common/autotest_common.sh@10 -- # set +x 00:23:17.248 05:05:47 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:17.506 [2024-04-27 05:05:47.260697] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:17.506 [2024-04-27 05:05:47.261059] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:17.506 [2024-04-27 05:05:47.267005] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:23:17.506 [2024-04-27 05:05:47.269929] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:17.506 05:05:47 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:18.439 05:05:48 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.439 05:05:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:18.439 05:05:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:18.439 05:05:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:18.439 05:05:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:18.439 05:05:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.439 05:05:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.696 05:05:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:18.696 "name": "raid_bdev1", 00:23:18.696 "uuid": "ffd92e38-9a17-4c99-a468-36101de99301", 00:23:18.696 "strip_size_kb": 0, 00:23:18.696 "state": "online", 00:23:18.696 "raid_level": "raid1", 00:23:18.696 "superblock": true, 00:23:18.696 "num_base_bdevs": 4, 00:23:18.696 "num_base_bdevs_discovered": 4, 00:23:18.696 "num_base_bdevs_operational": 4, 00:23:18.696 "process": { 00:23:18.696 "type": "rebuild", 00:23:18.696 "target": "spare", 00:23:18.696 "progress": { 00:23:18.696 "blocks": 24576, 00:23:18.696 "percent": 38 00:23:18.696 } 00:23:18.696 }, 00:23:18.696 "base_bdevs_list": [ 00:23:18.696 { 00:23:18.696 "name": "spare", 00:23:18.696 "uuid": "f5540efb-0a07-52e2-9e47-bd0f76546737", 00:23:18.696 "is_configured": true, 00:23:18.696 "data_offset": 2048, 00:23:18.696 "data_size": 63488 00:23:18.696 
}, 00:23:18.696 { 00:23:18.696 "name": "BaseBdev2", 00:23:18.696 "uuid": "cbf473d5-f5e2-5864-8cfc-d9fd13e7d384", 00:23:18.696 "is_configured": true, 00:23:18.696 "data_offset": 2048, 00:23:18.696 "data_size": 63488 00:23:18.696 }, 00:23:18.696 { 00:23:18.696 "name": "BaseBdev3", 00:23:18.696 "uuid": "cf0ed9b5-d1f8-5332-8085-badfa0b12c38", 00:23:18.696 "is_configured": true, 00:23:18.696 "data_offset": 2048, 00:23:18.696 "data_size": 63488 00:23:18.696 }, 00:23:18.696 { 00:23:18.696 "name": "BaseBdev4", 00:23:18.696 "uuid": "d30a2737-5f7e-55df-9d14-0909d976e2c6", 00:23:18.696 "is_configured": true, 00:23:18.696 "data_offset": 2048, 00:23:18.696 "data_size": 63488 00:23:18.696 } 00:23:18.696 ] 00:23:18.696 }' 00:23:18.696 05:05:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:18.954 05:05:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:18.954 05:05:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:18.954 05:05:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:18.954 05:05:48 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:19.212 [2024-04-27 05:05:48.912358] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:19.212 [2024-04-27 05:05:48.986014] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:19.212 [2024-04-27 05:05:48.986387] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:19.212 05:05:49 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:19.212 05:05:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:19.212 05:05:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:19.212 05:05:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:19.212 05:05:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:19.212 05:05:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:19.212 05:05:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:19.212 05:05:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:19.212 05:05:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:19.212 05:05:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:19.212 05:05:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.212 05:05:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.471 05:05:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:19.471 "name": "raid_bdev1", 00:23:19.471 "uuid": "ffd92e38-9a17-4c99-a468-36101de99301", 00:23:19.471 "strip_size_kb": 0, 00:23:19.471 "state": "online", 00:23:19.471 "raid_level": "raid1", 00:23:19.471 "superblock": true, 00:23:19.471 "num_base_bdevs": 4, 00:23:19.471 "num_base_bdevs_discovered": 3, 00:23:19.471 "num_base_bdevs_operational": 3, 00:23:19.471 "base_bdevs_list": [ 00:23:19.471 { 00:23:19.471 "name": null, 00:23:19.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.471 "is_configured": false, 00:23:19.471 "data_offset": 2048, 00:23:19.471 "data_size": 63488 00:23:19.471 }, 00:23:19.471 { 00:23:19.471 "name": "BaseBdev2", 00:23:19.471 "uuid": "cbf473d5-f5e2-5864-8cfc-d9fd13e7d384", 00:23:19.471 "is_configured": true, 00:23:19.471 "data_offset": 2048, 00:23:19.471 "data_size": 63488 00:23:19.471 }, 00:23:19.471 { 00:23:19.471 
"name": "BaseBdev3", 00:23:19.471 "uuid": "cf0ed9b5-d1f8-5332-8085-badfa0b12c38", 00:23:19.471 "is_configured": true, 00:23:19.471 "data_offset": 2048, 00:23:19.471 "data_size": 63488 00:23:19.471 }, 00:23:19.471 { 00:23:19.471 "name": "BaseBdev4", 00:23:19.471 "uuid": "d30a2737-5f7e-55df-9d14-0909d976e2c6", 00:23:19.471 "is_configured": true, 00:23:19.471 "data_offset": 2048, 00:23:19.471 "data_size": 63488 00:23:19.471 } 00:23:19.471 ] 00:23:19.471 }' 00:23:19.471 05:05:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:19.471 05:05:49 -- common/autotest_common.sh@10 -- # set +x 00:23:20.038 05:05:49 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:20.038 05:05:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:20.038 05:05:49 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:20.038 05:05:49 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:20.038 05:05:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:20.038 05:05:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.038 05:05:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.295 05:05:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:20.295 "name": "raid_bdev1", 00:23:20.295 "uuid": "ffd92e38-9a17-4c99-a468-36101de99301", 00:23:20.295 "strip_size_kb": 0, 00:23:20.295 "state": "online", 00:23:20.295 "raid_level": "raid1", 00:23:20.295 "superblock": true, 00:23:20.295 "num_base_bdevs": 4, 00:23:20.295 "num_base_bdevs_discovered": 3, 00:23:20.295 "num_base_bdevs_operational": 3, 00:23:20.295 "base_bdevs_list": [ 00:23:20.295 { 00:23:20.295 "name": null, 00:23:20.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.296 "is_configured": false, 00:23:20.296 "data_offset": 2048, 00:23:20.296 "data_size": 63488 00:23:20.296 }, 00:23:20.296 { 00:23:20.296 "name": "BaseBdev2", 00:23:20.296 "uuid": "cbf473d5-f5e2-5864-8cfc-d9fd13e7d384", 00:23:20.296 "is_configured": true, 00:23:20.296 "data_offset": 2048, 00:23:20.296 "data_size": 63488 00:23:20.296 }, 00:23:20.296 { 00:23:20.296 "name": "BaseBdev3", 00:23:20.296 "uuid": "cf0ed9b5-d1f8-5332-8085-badfa0b12c38", 00:23:20.296 "is_configured": true, 00:23:20.296 "data_offset": 2048, 00:23:20.296 "data_size": 63488 00:23:20.296 }, 00:23:20.296 { 00:23:20.296 "name": "BaseBdev4", 00:23:20.296 "uuid": "d30a2737-5f7e-55df-9d14-0909d976e2c6", 00:23:20.296 "is_configured": true, 00:23:20.296 "data_offset": 2048, 00:23:20.296 "data_size": 63488 00:23:20.296 } 00:23:20.296 ] 00:23:20.296 }' 00:23:20.296 05:05:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:20.554 05:05:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:20.554 05:05:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:20.554 05:05:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:20.554 05:05:50 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:20.812 [2024-04-27 05:05:50.481854] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:20.812 [2024-04-27 05:05:50.482227] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:20.812 [2024-04-27 05:05:50.487986] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:23:20.812 [2024-04-27 05:05:50.490701] 
bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:20.812 05:05:50 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:21.747 05:05:51 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:21.747 05:05:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:21.747 05:05:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:21.747 05:05:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:21.747 05:05:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:21.747 05:05:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.747 05:05:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.005 05:05:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:22.005 "name": "raid_bdev1", 00:23:22.005 "uuid": "ffd92e38-9a17-4c99-a468-36101de99301", 00:23:22.005 "strip_size_kb": 0, 00:23:22.005 "state": "online", 00:23:22.005 "raid_level": "raid1", 00:23:22.005 "superblock": true, 00:23:22.005 "num_base_bdevs": 4, 00:23:22.005 "num_base_bdevs_discovered": 4, 00:23:22.005 "num_base_bdevs_operational": 4, 00:23:22.005 "process": { 00:23:22.005 "type": "rebuild", 00:23:22.005 "target": "spare", 00:23:22.005 "progress": { 00:23:22.005 "blocks": 24576, 00:23:22.005 "percent": 38 00:23:22.005 } 00:23:22.005 }, 00:23:22.005 "base_bdevs_list": [ 00:23:22.005 { 00:23:22.005 "name": "spare", 00:23:22.005 "uuid": "f5540efb-0a07-52e2-9e47-bd0f76546737", 00:23:22.005 "is_configured": true, 00:23:22.005 "data_offset": 2048, 00:23:22.005 "data_size": 63488 00:23:22.005 }, 00:23:22.005 { 00:23:22.005 "name": "BaseBdev2", 00:23:22.005 "uuid": "cbf473d5-f5e2-5864-8cfc-d9fd13e7d384", 00:23:22.005 "is_configured": true, 00:23:22.005 "data_offset": 2048, 00:23:22.005 "data_size": 63488 00:23:22.005 }, 00:23:22.005 { 00:23:22.005 "name": "BaseBdev3", 00:23:22.005 "uuid": "cf0ed9b5-d1f8-5332-8085-badfa0b12c38", 00:23:22.005 "is_configured": true, 00:23:22.005 "data_offset": 2048, 00:23:22.005 "data_size": 63488 00:23:22.005 }, 00:23:22.005 { 00:23:22.005 "name": "BaseBdev4", 00:23:22.005 "uuid": "d30a2737-5f7e-55df-9d14-0909d976e2c6", 00:23:22.005 "is_configured": true, 00:23:22.005 "data_offset": 2048, 00:23:22.005 "data_size": 63488 00:23:22.005 } 00:23:22.005 ] 00:23:22.005 }' 00:23:22.005 05:05:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:22.005 05:05:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:22.005 05:05:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:22.005 05:05:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:22.005 05:05:51 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:23:22.005 05:05:51 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:23:22.005 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:23:22.005 05:05:51 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:23:22.005 05:05:51 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:22.005 05:05:51 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:23:22.005 05:05:51 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:22.264 [2024-04-27 05:05:52.104636] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:22.522 [2024-04-27 05:05:52.204386] 
bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3360 00:23:22.522 05:05:52 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:23:22.522 05:05:52 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:23:22.522 05:05:52 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:22.522 05:05:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:22.522 05:05:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:22.522 05:05:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:22.522 05:05:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:22.522 05:05:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.522 05:05:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.780 05:05:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:22.780 "name": "raid_bdev1", 00:23:22.780 "uuid": "ffd92e38-9a17-4c99-a468-36101de99301", 00:23:22.780 "strip_size_kb": 0, 00:23:22.780 "state": "online", 00:23:22.780 "raid_level": "raid1", 00:23:22.780 "superblock": true, 00:23:22.780 "num_base_bdevs": 4, 00:23:22.780 "num_base_bdevs_discovered": 3, 00:23:22.780 "num_base_bdevs_operational": 3, 00:23:22.780 "process": { 00:23:22.780 "type": "rebuild", 00:23:22.780 "target": "spare", 00:23:22.780 "progress": { 00:23:22.780 "blocks": 40960, 00:23:22.780 "percent": 64 00:23:22.780 } 00:23:22.780 }, 00:23:22.780 "base_bdevs_list": [ 00:23:22.780 { 00:23:22.780 "name": "spare", 00:23:22.780 "uuid": "f5540efb-0a07-52e2-9e47-bd0f76546737", 00:23:22.780 "is_configured": true, 00:23:22.780 "data_offset": 2048, 00:23:22.780 "data_size": 63488 00:23:22.780 }, 00:23:22.780 { 00:23:22.780 "name": null, 00:23:22.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.780 "is_configured": false, 00:23:22.780 "data_offset": 2048, 00:23:22.780 "data_size": 63488 00:23:22.780 }, 00:23:22.780 { 00:23:22.780 "name": "BaseBdev3", 00:23:22.780 "uuid": "cf0ed9b5-d1f8-5332-8085-badfa0b12c38", 00:23:22.780 "is_configured": true, 00:23:22.780 "data_offset": 2048, 00:23:22.780 "data_size": 63488 00:23:22.780 }, 00:23:22.780 { 00:23:22.780 "name": "BaseBdev4", 00:23:22.780 "uuid": "d30a2737-5f7e-55df-9d14-0909d976e2c6", 00:23:22.780 "is_configured": true, 00:23:22.780 "data_offset": 2048, 00:23:22.780 "data_size": 63488 00:23:22.780 } 00:23:22.780 ] 00:23:22.780 }' 00:23:22.780 05:05:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:22.780 05:05:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:22.780 05:05:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:22.780 05:05:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:22.780 05:05:52 -- bdev/bdev_raid.sh@657 -- # local timeout=518 00:23:22.780 05:05:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:22.780 05:05:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:22.780 05:05:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:22.780 05:05:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:22.780 05:05:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:22.780 05:05:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:22.780 05:05:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.780 05:05:52 -- 
bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.346 05:05:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:23.346 "name": "raid_bdev1", 00:23:23.346 "uuid": "ffd92e38-9a17-4c99-a468-36101de99301", 00:23:23.346 "strip_size_kb": 0, 00:23:23.346 "state": "online", 00:23:23.346 "raid_level": "raid1", 00:23:23.346 "superblock": true, 00:23:23.346 "num_base_bdevs": 4, 00:23:23.346 "num_base_bdevs_discovered": 3, 00:23:23.346 "num_base_bdevs_operational": 3, 00:23:23.346 "process": { 00:23:23.346 "type": "rebuild", 00:23:23.346 "target": "spare", 00:23:23.346 "progress": { 00:23:23.346 "blocks": 49152, 00:23:23.346 "percent": 77 00:23:23.346 } 00:23:23.346 }, 00:23:23.346 "base_bdevs_list": [ 00:23:23.346 { 00:23:23.346 "name": "spare", 00:23:23.346 "uuid": "f5540efb-0a07-52e2-9e47-bd0f76546737", 00:23:23.346 "is_configured": true, 00:23:23.346 "data_offset": 2048, 00:23:23.346 "data_size": 63488 00:23:23.346 }, 00:23:23.346 { 00:23:23.347 "name": null, 00:23:23.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.347 "is_configured": false, 00:23:23.347 "data_offset": 2048, 00:23:23.347 "data_size": 63488 00:23:23.347 }, 00:23:23.347 { 00:23:23.347 "name": "BaseBdev3", 00:23:23.347 "uuid": "cf0ed9b5-d1f8-5332-8085-badfa0b12c38", 00:23:23.347 "is_configured": true, 00:23:23.347 "data_offset": 2048, 00:23:23.347 "data_size": 63488 00:23:23.347 }, 00:23:23.347 { 00:23:23.347 "name": "BaseBdev4", 00:23:23.347 "uuid": "d30a2737-5f7e-55df-9d14-0909d976e2c6", 00:23:23.347 "is_configured": true, 00:23:23.347 "data_offset": 2048, 00:23:23.347 "data_size": 63488 00:23:23.347 } 00:23:23.347 ] 00:23:23.347 }' 00:23:23.347 05:05:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:23.347 05:05:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:23.347 05:05:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:23.347 05:05:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:23.347 05:05:53 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:23.912 [2024-04-27 05:05:53.617290] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:23.912 [2024-04-27 05:05:53.617725] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:23.912 [2024-04-27 05:05:53.618110] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:24.169 05:05:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:24.169 05:05:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:24.169 05:05:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:24.169 05:05:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:24.169 05:05:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:24.169 05:05:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:24.169 05:05:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.169 05:05:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.456 05:05:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:24.456 "name": "raid_bdev1", 00:23:24.456 "uuid": "ffd92e38-9a17-4c99-a468-36101de99301", 00:23:24.456 "strip_size_kb": 0, 00:23:24.456 "state": "online", 00:23:24.456 "raid_level": "raid1", 00:23:24.456 "superblock": true, 00:23:24.456 "num_base_bdevs": 4, 00:23:24.456 "num_base_bdevs_discovered": 3, 
00:23:24.456 "num_base_bdevs_operational": 3, 00:23:24.456 "base_bdevs_list": [ 00:23:24.456 { 00:23:24.456 "name": "spare", 00:23:24.456 "uuid": "f5540efb-0a07-52e2-9e47-bd0f76546737", 00:23:24.456 "is_configured": true, 00:23:24.456 "data_offset": 2048, 00:23:24.456 "data_size": 63488 00:23:24.456 }, 00:23:24.456 { 00:23:24.456 "name": null, 00:23:24.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.456 "is_configured": false, 00:23:24.456 "data_offset": 2048, 00:23:24.456 "data_size": 63488 00:23:24.456 }, 00:23:24.456 { 00:23:24.456 "name": "BaseBdev3", 00:23:24.456 "uuid": "cf0ed9b5-d1f8-5332-8085-badfa0b12c38", 00:23:24.456 "is_configured": true, 00:23:24.456 "data_offset": 2048, 00:23:24.456 "data_size": 63488 00:23:24.456 }, 00:23:24.456 { 00:23:24.456 "name": "BaseBdev4", 00:23:24.456 "uuid": "d30a2737-5f7e-55df-9d14-0909d976e2c6", 00:23:24.456 "is_configured": true, 00:23:24.456 "data_offset": 2048, 00:23:24.456 "data_size": 63488 00:23:24.456 } 00:23:24.456 ] 00:23:24.456 }' 00:23:24.456 05:05:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:24.725 05:05:54 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:24.725 05:05:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:24.725 05:05:54 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:24.725 05:05:54 -- bdev/bdev_raid.sh@660 -- # break 00:23:24.725 05:05:54 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:24.725 05:05:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:24.725 05:05:54 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:24.725 05:05:54 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:24.725 05:05:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:24.725 05:05:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.725 05:05:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.984 05:05:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:24.984 "name": "raid_bdev1", 00:23:24.984 "uuid": "ffd92e38-9a17-4c99-a468-36101de99301", 00:23:24.984 "strip_size_kb": 0, 00:23:24.984 "state": "online", 00:23:24.984 "raid_level": "raid1", 00:23:24.984 "superblock": true, 00:23:24.984 "num_base_bdevs": 4, 00:23:24.984 "num_base_bdevs_discovered": 3, 00:23:24.984 "num_base_bdevs_operational": 3, 00:23:24.984 "base_bdevs_list": [ 00:23:24.984 { 00:23:24.984 "name": "spare", 00:23:24.984 "uuid": "f5540efb-0a07-52e2-9e47-bd0f76546737", 00:23:24.984 "is_configured": true, 00:23:24.984 "data_offset": 2048, 00:23:24.984 "data_size": 63488 00:23:24.984 }, 00:23:24.984 { 00:23:24.984 "name": null, 00:23:24.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.984 "is_configured": false, 00:23:24.984 "data_offset": 2048, 00:23:24.984 "data_size": 63488 00:23:24.984 }, 00:23:24.984 { 00:23:24.984 "name": "BaseBdev3", 00:23:24.984 "uuid": "cf0ed9b5-d1f8-5332-8085-badfa0b12c38", 00:23:24.984 "is_configured": true, 00:23:24.984 "data_offset": 2048, 00:23:24.984 "data_size": 63488 00:23:24.984 }, 00:23:24.984 { 00:23:24.984 "name": "BaseBdev4", 00:23:24.984 "uuid": "d30a2737-5f7e-55df-9d14-0909d976e2c6", 00:23:24.984 "is_configured": true, 00:23:24.984 "data_offset": 2048, 00:23:24.984 "data_size": 63488 00:23:24.984 } 00:23:24.984 ] 00:23:24.984 }' 00:23:24.984 05:05:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:24.984 05:05:54 -- 
bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:24.984 05:05:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:24.984 05:05:54 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:24.984 05:05:54 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:24.984 05:05:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:24.984 05:05:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:24.984 05:05:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:24.984 05:05:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:24.984 05:05:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:24.984 05:05:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:24.984 05:05:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:24.984 05:05:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:24.984 05:05:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:24.984 05:05:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.984 05:05:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.242 05:05:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:25.242 "name": "raid_bdev1", 00:23:25.242 "uuid": "ffd92e38-9a17-4c99-a468-36101de99301", 00:23:25.242 "strip_size_kb": 0, 00:23:25.242 "state": "online", 00:23:25.242 "raid_level": "raid1", 00:23:25.242 "superblock": true, 00:23:25.242 "num_base_bdevs": 4, 00:23:25.242 "num_base_bdevs_discovered": 3, 00:23:25.242 "num_base_bdevs_operational": 3, 00:23:25.242 "base_bdevs_list": [ 00:23:25.242 { 00:23:25.242 "name": "spare", 00:23:25.242 "uuid": "f5540efb-0a07-52e2-9e47-bd0f76546737", 00:23:25.242 "is_configured": true, 00:23:25.242 "data_offset": 2048, 00:23:25.242 "data_size": 63488 00:23:25.242 }, 00:23:25.242 { 00:23:25.242 "name": null, 00:23:25.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.242 "is_configured": false, 00:23:25.242 "data_offset": 2048, 00:23:25.242 "data_size": 63488 00:23:25.242 }, 00:23:25.242 { 00:23:25.242 "name": "BaseBdev3", 00:23:25.242 "uuid": "cf0ed9b5-d1f8-5332-8085-badfa0b12c38", 00:23:25.242 "is_configured": true, 00:23:25.242 "data_offset": 2048, 00:23:25.242 "data_size": 63488 00:23:25.242 }, 00:23:25.242 { 00:23:25.242 "name": "BaseBdev4", 00:23:25.242 "uuid": "d30a2737-5f7e-55df-9d14-0909d976e2c6", 00:23:25.242 "is_configured": true, 00:23:25.242 "data_offset": 2048, 00:23:25.242 "data_size": 63488 00:23:25.242 } 00:23:25.242 ] 00:23:25.242 }' 00:23:25.242 05:05:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:25.242 05:05:55 -- common/autotest_common.sh@10 -- # set +x 00:23:25.808 05:05:55 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:26.066 [2024-04-27 05:05:55.944202] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:26.066 [2024-04-27 05:05:55.944571] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:26.066 [2024-04-27 05:05:55.944840] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:26.066 [2024-04-27 05:05:55.945068] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:26.066 [2024-04-27 05:05:55.945195] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x61600000a580 name raid_bdev1, state offline 00:23:26.066 05:05:55 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:26.066 05:05:55 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.632 05:05:56 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:26.632 05:05:56 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:26.632 05:05:56 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:26.632 05:05:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:26.632 05:05:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:26.632 05:05:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:26.632 05:05:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:26.632 05:05:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:26.632 05:05:56 -- bdev/nbd_common.sh@12 -- # local i 00:23:26.632 05:05:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:26.632 05:05:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:26.632 05:05:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:26.632 /dev/nbd0 00:23:26.891 05:05:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:26.891 05:05:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:26.891 05:05:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:26.891 05:05:56 -- common/autotest_common.sh@857 -- # local i 00:23:26.891 05:05:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:26.891 05:05:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:26.891 05:05:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:26.891 05:05:56 -- common/autotest_common.sh@861 -- # break 00:23:26.891 05:05:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:26.891 05:05:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:26.891 05:05:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:26.891 1+0 records in 00:23:26.891 1+0 records out 00:23:26.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065556 s, 6.2 MB/s 00:23:26.891 05:05:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:26.891 05:05:56 -- common/autotest_common.sh@874 -- # size=4096 00:23:26.891 05:05:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:26.891 05:05:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:26.891 05:05:56 -- common/autotest_common.sh@877 -- # return 0 00:23:26.891 05:05:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:26.891 05:05:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:26.891 05:05:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:27.150 /dev/nbd1 00:23:27.150 05:05:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:27.150 05:05:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:27.150 05:05:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:27.150 05:05:56 -- common/autotest_common.sh@857 -- # local i 00:23:27.150 05:05:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:27.150 05:05:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:27.150 05:05:56 -- 
common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:27.150 05:05:56 -- common/autotest_common.sh@861 -- # break 00:23:27.150 05:05:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:27.150 05:05:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:27.150 05:05:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:27.150 1+0 records in 00:23:27.150 1+0 records out 00:23:27.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599199 s, 6.8 MB/s 00:23:27.150 05:05:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:27.150 05:05:56 -- common/autotest_common.sh@874 -- # size=4096 00:23:27.150 05:05:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:27.150 05:05:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:27.150 05:05:56 -- common/autotest_common.sh@877 -- # return 0 00:23:27.150 05:05:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:27.150 05:05:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:27.150 05:05:56 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:27.150 05:05:56 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:27.150 05:05:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:27.150 05:05:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:27.150 05:05:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:27.150 05:05:56 -- bdev/nbd_common.sh@51 -- # local i 00:23:27.150 05:05:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:27.150 05:05:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:27.408 05:05:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:27.408 05:05:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:27.408 05:05:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:27.408 05:05:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:27.408 05:05:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:27.408 05:05:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:27.408 05:05:57 -- bdev/nbd_common.sh@41 -- # break 00:23:27.408 05:05:57 -- bdev/nbd_common.sh@45 -- # return 0 00:23:27.408 05:05:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:27.408 05:05:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:27.666 05:05:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:27.666 05:05:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:27.666 05:05:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:27.666 05:05:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:27.666 05:05:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:27.666 05:05:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:27.666 05:05:57 -- bdev/nbd_common.sh@41 -- # break 00:23:27.666 05:05:57 -- bdev/nbd_common.sh@45 -- # return 0 00:23:27.666 05:05:57 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:23:27.666 05:05:57 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:27.666 05:05:57 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:23:27.666 05:05:57 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete BaseBdev1 00:23:27.924 05:05:57 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:28.182 [2024-04-27 05:05:58.042586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:28.182 [2024-04-27 05:05:58.043022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.182 [2024-04-27 05:05:58.043135] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:28.182 [2024-04-27 05:05:58.043407] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.182 [2024-04-27 05:05:58.046368] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.182 [2024-04-27 05:05:58.046572] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:28.182 [2024-04-27 05:05:58.046864] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:28.182 [2024-04-27 05:05:58.047052] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:28.182 BaseBdev1 00:23:28.182 05:05:58 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:28.182 05:05:58 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:23:28.182 05:05:58 -- bdev/bdev_raid.sh@696 -- # continue 00:23:28.182 05:05:58 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:28.182 05:05:58 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:23:28.182 05:05:58 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:23:28.440 05:05:58 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:28.699 [2024-04-27 05:05:58.567144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:28.699 [2024-04-27 05:05:58.567558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.699 [2024-04-27 05:05:58.567768] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:23:28.699 [2024-04-27 05:05:58.567904] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.699 [2024-04-27 05:05:58.568492] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.699 [2024-04-27 05:05:58.568713] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:28.699 [2024-04-27 05:05:58.568958] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:23:28.699 [2024-04-27 05:05:58.569085] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:23:28.699 [2024-04-27 05:05:58.569191] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:28.699 [2024-04-27 05:05:58.569257] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:23:28.699 [2024-04-27 05:05:58.569472] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:28.699 BaseBdev3 00:23:28.699 05:05:58 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:28.699 05:05:58 -- bdev/bdev_raid.sh@695 -- # '[' -z 
BaseBdev4 ']' 00:23:28.699 05:05:58 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:23:28.958 05:05:58 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:29.216 [2024-04-27 05:05:59.055268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:29.216 [2024-04-27 05:05:59.055582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.216 [2024-04-27 05:05:59.055753] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:23:29.217 [2024-04-27 05:05:59.055925] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.217 [2024-04-27 05:05:59.056598] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.217 [2024-04-27 05:05:59.056781] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:29.217 [2024-04-27 05:05:59.057013] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:23:29.217 [2024-04-27 05:05:59.057155] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:29.217 BaseBdev4 00:23:29.217 05:05:59 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:29.475 05:05:59 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:29.733 [2024-04-27 05:05:59.535341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:29.733 [2024-04-27 05:05:59.535753] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.733 [2024-04-27 05:05:59.535867] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:23:29.733 [2024-04-27 05:05:59.536112] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.733 [2024-04-27 05:05:59.536770] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.733 [2024-04-27 05:05:59.536962] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:29.733 [2024-04-27 05:05:59.537210] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:23:29.733 [2024-04-27 05:05:59.537362] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:29.733 spare 00:23:29.733 05:05:59 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:29.733 05:05:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:29.733 05:05:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:29.733 05:05:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:29.733 05:05:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:29.733 05:05:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:29.733 05:05:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:29.733 05:05:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:29.733 05:05:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:29.733 05:05:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:29.733 05:05:59 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.733 05:05:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.733 [2024-04-27 05:05:59.637664] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:23:29.733 [2024-04-27 05:05:59.637925] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:29.733 [2024-04-27 05:05:59.638221] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:23:29.733 [2024-04-27 05:05:59.638966] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:23:29.733 [2024-04-27 05:05:59.639089] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:23:29.733 [2024-04-27 05:05:59.639386] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:29.991 05:05:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:29.991 "name": "raid_bdev1", 00:23:29.991 "uuid": "ffd92e38-9a17-4c99-a468-36101de99301", 00:23:29.991 "strip_size_kb": 0, 00:23:29.991 "state": "online", 00:23:29.991 "raid_level": "raid1", 00:23:29.991 "superblock": true, 00:23:29.991 "num_base_bdevs": 4, 00:23:29.991 "num_base_bdevs_discovered": 3, 00:23:29.991 "num_base_bdevs_operational": 3, 00:23:29.991 "base_bdevs_list": [ 00:23:29.991 { 00:23:29.991 "name": "spare", 00:23:29.991 "uuid": "f5540efb-0a07-52e2-9e47-bd0f76546737", 00:23:29.991 "is_configured": true, 00:23:29.991 "data_offset": 2048, 00:23:29.991 "data_size": 63488 00:23:29.991 }, 00:23:29.991 { 00:23:29.991 "name": null, 00:23:29.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.991 "is_configured": false, 00:23:29.991 "data_offset": 2048, 00:23:29.991 "data_size": 63488 00:23:29.991 }, 00:23:29.991 { 00:23:29.991 "name": "BaseBdev3", 00:23:29.991 "uuid": "cf0ed9b5-d1f8-5332-8085-badfa0b12c38", 00:23:29.991 "is_configured": true, 00:23:29.991 "data_offset": 2048, 00:23:29.991 "data_size": 63488 00:23:29.991 }, 00:23:29.991 { 00:23:29.991 "name": "BaseBdev4", 00:23:29.991 "uuid": "d30a2737-5f7e-55df-9d14-0909d976e2c6", 00:23:29.991 "is_configured": true, 00:23:29.991 "data_offset": 2048, 00:23:29.991 "data_size": 63488 00:23:29.991 } 00:23:29.991 ] 00:23:29.991 }' 00:23:29.991 05:05:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:29.991 05:05:59 -- common/autotest_common.sh@10 -- # set +x 00:23:30.557 05:06:00 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:30.557 05:06:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:30.557 05:06:00 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:30.557 05:06:00 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:30.557 05:06:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:30.557 05:06:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.557 05:06:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.816 05:06:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:30.816 "name": "raid_bdev1", 00:23:30.816 "uuid": "ffd92e38-9a17-4c99-a468-36101de99301", 00:23:30.816 "strip_size_kb": 0, 00:23:30.816 "state": "online", 00:23:30.816 "raid_level": "raid1", 00:23:30.816 "superblock": true, 00:23:30.816 "num_base_bdevs": 4, 00:23:30.816 "num_base_bdevs_discovered": 3, 00:23:30.816 
"num_base_bdevs_operational": 3, 00:23:30.816 "base_bdevs_list": [ 00:23:30.816 { 00:23:30.816 "name": "spare", 00:23:30.816 "uuid": "f5540efb-0a07-52e2-9e47-bd0f76546737", 00:23:30.816 "is_configured": true, 00:23:30.816 "data_offset": 2048, 00:23:30.816 "data_size": 63488 00:23:30.816 }, 00:23:30.816 { 00:23:30.816 "name": null, 00:23:30.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.816 "is_configured": false, 00:23:30.816 "data_offset": 2048, 00:23:30.816 "data_size": 63488 00:23:30.816 }, 00:23:30.816 { 00:23:30.816 "name": "BaseBdev3", 00:23:30.816 "uuid": "cf0ed9b5-d1f8-5332-8085-badfa0b12c38", 00:23:30.816 "is_configured": true, 00:23:30.816 "data_offset": 2048, 00:23:30.816 "data_size": 63488 00:23:30.816 }, 00:23:30.816 { 00:23:30.816 "name": "BaseBdev4", 00:23:30.816 "uuid": "d30a2737-5f7e-55df-9d14-0909d976e2c6", 00:23:30.816 "is_configured": true, 00:23:30.816 "data_offset": 2048, 00:23:30.816 "data_size": 63488 00:23:30.816 } 00:23:30.816 ] 00:23:30.816 }' 00:23:30.816 05:06:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:30.816 05:06:00 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:30.816 05:06:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:31.074 05:06:00 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:31.074 05:06:00 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.074 05:06:00 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:31.332 05:06:01 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:23:31.332 05:06:01 -- bdev/bdev_raid.sh@709 -- # killprocess 137516 00:23:31.332 05:06:01 -- common/autotest_common.sh@926 -- # '[' -z 137516 ']' 00:23:31.332 05:06:01 -- common/autotest_common.sh@930 -- # kill -0 137516 00:23:31.332 05:06:01 -- common/autotest_common.sh@931 -- # uname 00:23:31.332 05:06:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:31.332 05:06:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137516 00:23:31.332 05:06:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:31.332 05:06:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:31.332 05:06:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137516' 00:23:31.332 killing process with pid 137516 00:23:31.332 05:06:01 -- common/autotest_common.sh@945 -- # kill 137516 00:23:31.332 Received shutdown signal, test time was about 60.000000 seconds 00:23:31.332 00:23:31.332 Latency(us) 00:23:31.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.332 =================================================================================================================== 00:23:31.332 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:31.332 05:06:01 -- common/autotest_common.sh@950 -- # wait 137516 00:23:31.332 [2024-04-27 05:06:01.034838] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:31.332 [2024-04-27 05:06:01.034969] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:31.332 [2024-04-27 05:06:01.035079] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:31.332 [2024-04-27 05:06:01.035210] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:23:31.332 [2024-04-27 05:06:01.137198] 
bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:31.898 00:23:31.898 real 0m28.298s 00:23:31.898 user 0m41.511s 00:23:31.898 sys 0m4.610s 00:23:31.898 05:06:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:31.898 05:06:01 -- common/autotest_common.sh@10 -- # set +x 00:23:31.898 ************************************ 00:23:31.898 END TEST raid_rebuild_test_sb 00:23:31.898 ************************************ 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:23:31.898 05:06:01 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:31.898 05:06:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:31.898 05:06:01 -- common/autotest_common.sh@10 -- # set +x 00:23:31.898 ************************************ 00:23:31.898 START TEST raid_rebuild_test_io 00:23:31.898 ************************************ 00:23:31.898 05:06:01 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false true 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@544 -- # raid_pid=138183 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@545 -- # waitforlisten 138183 /var/tmp/spdk-raid.sock 00:23:31.898 05:06:01 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:31.899 05:06:01 -- common/autotest_common.sh@819 -- # '[' -z 138183 ']' 
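The trace at this point launches bdevperf as the RPC application for raid_rebuild_test_io and then waits for its UNIX-domain socket. A minimal stand-alone sketch of that launch, reusing the paths and flags captured in the log; the readiness poll is an illustrative substitute for the harness's waitforlisten helper, not the actual implementation:

```bash
#!/usr/bin/env bash
# Start bdevperf as the RPC target for the rebuild test and wait for its socket.
# Paths and flags are copied from the invocation logged above; the polling loop
# below is only a stand-in for autotest_common.sh's waitforlisten.
set -euo pipefail

SPDK_REPO=/home/vagrant/spdk_repo/spdk      # checkout location seen in the log
RPC_SOCK=/var/tmp/spdk-raid.sock

"$SPDK_REPO/build/examples/bdevperf" \
    -r "$RPC_SOCK" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!

# Poll until the application answers RPCs on the socket (waitforlisten does the
# equivalent with a bounded retry count before giving up).
until "$SPDK_REPO/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "bdevperf (pid $raid_pid) is listening on $RPC_SOCK"
```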
00:23:31.899 05:06:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:31.899 05:06:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:31.899 05:06:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:31.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:31.899 05:06:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:31.899 05:06:01 -- common/autotest_common.sh@10 -- # set +x 00:23:31.899 [2024-04-27 05:06:01.670644] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:31.899 [2024-04-27 05:06:01.671148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138183 ] 00:23:31.899 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:31.899 Zero copy mechanism will not be used. 00:23:32.156 [2024-04-27 05:06:01.829769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.157 [2024-04-27 05:06:01.961479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.157 [2024-04-27 05:06:02.049071] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:33.100 05:06:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:33.100 05:06:02 -- common/autotest_common.sh@852 -- # return 0 00:23:33.100 05:06:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:33.100 05:06:02 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:33.100 05:06:02 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:33.100 BaseBdev1 00:23:33.100 05:06:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:33.100 05:06:02 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:33.100 05:06:02 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:33.363 BaseBdev2 00:23:33.363 05:06:03 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:33.363 05:06:03 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:33.363 05:06:03 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:33.621 BaseBdev3 00:23:33.621 05:06:03 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:33.621 05:06:03 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:33.621 05:06:03 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:33.878 BaseBdev4 00:23:33.878 05:06:03 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:34.136 spare_malloc 00:23:34.136 05:06:03 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:34.395 spare_delay 00:23:34.395 05:06:04 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:34.653 
[2024-04-27 05:06:04.499377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:34.653 [2024-04-27 05:06:04.499851] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.653 [2024-04-27 05:06:04.499975] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:23:34.653 [2024-04-27 05:06:04.500287] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.653 [2024-04-27 05:06:04.503498] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.653 [2024-04-27 05:06:04.503699] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:34.653 spare 00:23:34.653 05:06:04 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:34.956 [2024-04-27 05:06:04.744268] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:34.956 [2024-04-27 05:06:04.747079] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:34.956 [2024-04-27 05:06:04.747290] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:34.956 [2024-04-27 05:06:04.747376] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:34.956 [2024-04-27 05:06:04.747541] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:23:34.956 [2024-04-27 05:06:04.747590] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:34.956 [2024-04-27 05:06:04.747887] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:34.956 [2024-04-27 05:06:04.748524] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:23:34.956 [2024-04-27 05:06:04.748666] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:23:34.956 [2024-04-27 05:06:04.749055] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:34.956 05:06:04 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:34.956 05:06:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:34.956 05:06:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:34.956 05:06:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:34.956 05:06:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:34.956 05:06:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:34.956 05:06:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:34.956 05:06:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:34.956 05:06:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:34.956 05:06:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:34.956 05:06:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.956 05:06:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.213 05:06:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:35.213 "name": "raid_bdev1", 00:23:35.213 "uuid": "62cf86cc-2787-406b-8d0f-03e39a99b3b3", 00:23:35.213 "strip_size_kb": 0, 00:23:35.213 "state": "online", 00:23:35.213 "raid_level": "raid1", 00:23:35.213 "superblock": 
false, 00:23:35.213 "num_base_bdevs": 4, 00:23:35.214 "num_base_bdevs_discovered": 4, 00:23:35.214 "num_base_bdevs_operational": 4, 00:23:35.214 "base_bdevs_list": [ 00:23:35.214 { 00:23:35.214 "name": "BaseBdev1", 00:23:35.214 "uuid": "024bb12a-9fb4-4337-80cf-b881d2c934dc", 00:23:35.214 "is_configured": true, 00:23:35.214 "data_offset": 0, 00:23:35.214 "data_size": 65536 00:23:35.214 }, 00:23:35.214 { 00:23:35.214 "name": "BaseBdev2", 00:23:35.214 "uuid": "83880d58-32b3-4b6a-b4f1-779c580d7142", 00:23:35.214 "is_configured": true, 00:23:35.214 "data_offset": 0, 00:23:35.214 "data_size": 65536 00:23:35.214 }, 00:23:35.214 { 00:23:35.214 "name": "BaseBdev3", 00:23:35.214 "uuid": "01f2cfef-ce83-4371-b48f-142a2edcae74", 00:23:35.214 "is_configured": true, 00:23:35.214 "data_offset": 0, 00:23:35.214 "data_size": 65536 00:23:35.214 }, 00:23:35.214 { 00:23:35.214 "name": "BaseBdev4", 00:23:35.214 "uuid": "7f705937-af47-42ad-855b-f56ad61766b8", 00:23:35.214 "is_configured": true, 00:23:35.214 "data_offset": 0, 00:23:35.214 "data_size": 65536 00:23:35.214 } 00:23:35.214 ] 00:23:35.214 }' 00:23:35.214 05:06:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:35.214 05:06:05 -- common/autotest_common.sh@10 -- # set +x 00:23:35.780 05:06:05 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:35.780 05:06:05 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:36.346 [2024-04-27 05:06:05.953767] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:36.346 05:06:05 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:23:36.346 05:06:05 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:36.346 05:06:05 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.346 05:06:06 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:23:36.346 05:06:06 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:23:36.346 05:06:06 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:36.346 05:06:06 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:36.604 [2024-04-27 05:06:06.321067] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:23:36.604 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:36.604 Zero copy mechanism will not be used. 00:23:36.604 Running I/O for 60 seconds... 
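Here the log shows the freshly assembled raid1 bdev being sized up, the background randrw workload being kicked off, and a base bdev being pulled out while I/O is in flight. A hedged sketch of that sequence, built only from the RPC and jq calls visible in the trace (variable names are illustrative, and the trace interleaves the last two steps rather than running them strictly in this order):

```bash
SPDK_REPO=/home/vagrant/spdk_repo/spdk      # assumed layout, as used throughout the log
RPC_SOCK=/var/tmp/spdk-raid.sock
rpc() { "$SPDK_REPO/scripts/rpc.py" -s "$RPC_SOCK" "$@"; }

# Size of the assembled array and data offset of its first base bdev,
# exactly as queried in the trace above.
raid_bdev_size=$(rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks')
data_offset=$(rpc bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset')
echo "raid_bdev1: ${raid_bdev_size} blocks, data_offset ${data_offset}"

# With background_io=true the harness starts the queued bdevperf job
# (the 60 s workload whose "Running I/O for 60 seconds..." line appears above) ...
"$SPDK_REPO/examples/bdev/bdevperf/bdevperf.py" -s "$RPC_SOCK" perform_tests &

# ... and removes a base bdev so the degraded/rebuild path is exercised
# while that I/O is running.
rpc bdev_raid_remove_base_bdev BaseBdev1
```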
00:23:36.604 [2024-04-27 05:06:06.464634] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:36.604 [2024-04-27 05:06:06.475234] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:23:36.604 05:06:06 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:36.604 05:06:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:36.604 05:06:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:36.604 05:06:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:36.604 05:06:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:36.604 05:06:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:36.604 05:06:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:36.862 05:06:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:36.862 05:06:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:36.862 05:06:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:36.862 05:06:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.862 05:06:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.120 05:06:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:37.120 "name": "raid_bdev1", 00:23:37.120 "uuid": "62cf86cc-2787-406b-8d0f-03e39a99b3b3", 00:23:37.120 "strip_size_kb": 0, 00:23:37.120 "state": "online", 00:23:37.120 "raid_level": "raid1", 00:23:37.120 "superblock": false, 00:23:37.120 "num_base_bdevs": 4, 00:23:37.120 "num_base_bdevs_discovered": 3, 00:23:37.120 "num_base_bdevs_operational": 3, 00:23:37.120 "base_bdevs_list": [ 00:23:37.120 { 00:23:37.120 "name": null, 00:23:37.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.120 "is_configured": false, 00:23:37.120 "data_offset": 0, 00:23:37.120 "data_size": 65536 00:23:37.120 }, 00:23:37.120 { 00:23:37.120 "name": "BaseBdev2", 00:23:37.120 "uuid": "83880d58-32b3-4b6a-b4f1-779c580d7142", 00:23:37.120 "is_configured": true, 00:23:37.120 "data_offset": 0, 00:23:37.120 "data_size": 65536 00:23:37.120 }, 00:23:37.120 { 00:23:37.120 "name": "BaseBdev3", 00:23:37.120 "uuid": "01f2cfef-ce83-4371-b48f-142a2edcae74", 00:23:37.120 "is_configured": true, 00:23:37.120 "data_offset": 0, 00:23:37.120 "data_size": 65536 00:23:37.120 }, 00:23:37.120 { 00:23:37.120 "name": "BaseBdev4", 00:23:37.120 "uuid": "7f705937-af47-42ad-855b-f56ad61766b8", 00:23:37.120 "is_configured": true, 00:23:37.120 "data_offset": 0, 00:23:37.120 "data_size": 65536 00:23:37.120 } 00:23:37.120 ] 00:23:37.120 }' 00:23:37.120 05:06:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:37.120 05:06:06 -- common/autotest_common.sh@10 -- # set +x 00:23:37.687 05:06:07 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:37.945 [2024-04-27 05:06:07.688974] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:37.945 [2024-04-27 05:06:07.689331] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:37.945 [2024-04-27 05:06:07.755555] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:23:37.945 05:06:07 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:37.945 [2024-04-27 05:06:07.758526] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:38.204 [2024-04-27 
05:06:07.888171] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:38.204 [2024-04-27 05:06:07.889363] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:38.204 [2024-04-27 05:06:08.026160] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:38.204 [2024-04-27 05:06:08.027366] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:38.463 [2024-04-27 05:06:08.351894] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:38.463 [2024-04-27 05:06:08.353034] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:38.722 [2024-04-27 05:06:08.577626] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:38.722 [2024-04-27 05:06:08.578365] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:38.981 05:06:08 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:38.981 05:06:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:38.981 05:06:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:38.981 05:06:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:38.981 05:06:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:38.981 05:06:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.981 05:06:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.240 05:06:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:39.240 "name": "raid_bdev1", 00:23:39.240 "uuid": "62cf86cc-2787-406b-8d0f-03e39a99b3b3", 00:23:39.240 "strip_size_kb": 0, 00:23:39.240 "state": "online", 00:23:39.240 "raid_level": "raid1", 00:23:39.240 "superblock": false, 00:23:39.240 "num_base_bdevs": 4, 00:23:39.240 "num_base_bdevs_discovered": 4, 00:23:39.240 "num_base_bdevs_operational": 4, 00:23:39.240 "process": { 00:23:39.240 "type": "rebuild", 00:23:39.240 "target": "spare", 00:23:39.240 "progress": { 00:23:39.240 "blocks": 14336, 00:23:39.240 "percent": 21 00:23:39.240 } 00:23:39.240 }, 00:23:39.240 "base_bdevs_list": [ 00:23:39.240 { 00:23:39.240 "name": "spare", 00:23:39.240 "uuid": "08d21220-68d6-52e2-8d11-8ee7617fdf84", 00:23:39.240 "is_configured": true, 00:23:39.240 "data_offset": 0, 00:23:39.240 "data_size": 65536 00:23:39.240 }, 00:23:39.240 { 00:23:39.240 "name": "BaseBdev2", 00:23:39.240 "uuid": "83880d58-32b3-4b6a-b4f1-779c580d7142", 00:23:39.240 "is_configured": true, 00:23:39.240 "data_offset": 0, 00:23:39.240 "data_size": 65536 00:23:39.240 }, 00:23:39.240 { 00:23:39.240 "name": "BaseBdev3", 00:23:39.240 "uuid": "01f2cfef-ce83-4371-b48f-142a2edcae74", 00:23:39.240 "is_configured": true, 00:23:39.240 "data_offset": 0, 00:23:39.240 "data_size": 65536 00:23:39.240 }, 00:23:39.240 { 00:23:39.240 "name": "BaseBdev4", 00:23:39.240 "uuid": "7f705937-af47-42ad-855b-f56ad61766b8", 00:23:39.240 "is_configured": true, 00:23:39.240 "data_offset": 0, 00:23:39.240 "data_size": 65536 00:23:39.240 } 00:23:39.240 ] 00:23:39.240 }' 00:23:39.240 05:06:09 -- bdev/bdev_raid.sh@190 -- # jq 
-r '.process.type // "none"' 00:23:39.240 05:06:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:39.240 05:06:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:39.240 05:06:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:39.240 05:06:09 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:39.499 [2024-04-27 05:06:09.305979] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:39.499 [2024-04-27 05:06:09.381559] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:39.758 [2024-04-27 05:06:09.420116] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:39.758 [2024-04-27 05:06:09.441275] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:39.758 [2024-04-27 05:06:09.456450] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.758 [2024-04-27 05:06:09.475784] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:23:39.758 05:06:09 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:39.758 05:06:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:39.758 05:06:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:39.758 05:06:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:39.758 05:06:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:39.758 05:06:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:39.758 05:06:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:39.758 05:06:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:39.758 05:06:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:39.758 05:06:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:39.758 05:06:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.758 05:06:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.017 05:06:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:40.017 "name": "raid_bdev1", 00:23:40.017 "uuid": "62cf86cc-2787-406b-8d0f-03e39a99b3b3", 00:23:40.017 "strip_size_kb": 0, 00:23:40.017 "state": "online", 00:23:40.017 "raid_level": "raid1", 00:23:40.017 "superblock": false, 00:23:40.017 "num_base_bdevs": 4, 00:23:40.017 "num_base_bdevs_discovered": 3, 00:23:40.017 "num_base_bdevs_operational": 3, 00:23:40.017 "base_bdevs_list": [ 00:23:40.017 { 00:23:40.017 "name": null, 00:23:40.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.017 "is_configured": false, 00:23:40.017 "data_offset": 0, 00:23:40.017 "data_size": 65536 00:23:40.017 }, 00:23:40.017 { 00:23:40.017 "name": "BaseBdev2", 00:23:40.017 "uuid": "83880d58-32b3-4b6a-b4f1-779c580d7142", 00:23:40.017 "is_configured": true, 00:23:40.017 "data_offset": 0, 00:23:40.017 "data_size": 65536 00:23:40.017 }, 00:23:40.017 { 00:23:40.017 "name": "BaseBdev3", 00:23:40.017 "uuid": "01f2cfef-ce83-4371-b48f-142a2edcae74", 00:23:40.017 "is_configured": true, 00:23:40.017 "data_offset": 0, 00:23:40.017 "data_size": 65536 00:23:40.017 }, 00:23:40.017 { 00:23:40.017 "name": "BaseBdev4", 00:23:40.017 "uuid": "7f705937-af47-42ad-855b-f56ad61766b8", 
00:23:40.017 "is_configured": true, 00:23:40.017 "data_offset": 0, 00:23:40.017 "data_size": 65536 00:23:40.017 } 00:23:40.017 ] 00:23:40.017 }' 00:23:40.017 05:06:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:40.017 05:06:09 -- common/autotest_common.sh@10 -- # set +x 00:23:40.952 05:06:10 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:40.952 05:06:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:40.952 05:06:10 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:40.952 05:06:10 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:40.952 05:06:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:40.952 05:06:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.952 05:06:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.952 05:06:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:40.952 "name": "raid_bdev1", 00:23:40.952 "uuid": "62cf86cc-2787-406b-8d0f-03e39a99b3b3", 00:23:40.952 "strip_size_kb": 0, 00:23:40.952 "state": "online", 00:23:40.952 "raid_level": "raid1", 00:23:40.952 "superblock": false, 00:23:40.952 "num_base_bdevs": 4, 00:23:40.952 "num_base_bdevs_discovered": 3, 00:23:40.952 "num_base_bdevs_operational": 3, 00:23:40.952 "base_bdevs_list": [ 00:23:40.952 { 00:23:40.952 "name": null, 00:23:40.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.952 "is_configured": false, 00:23:40.952 "data_offset": 0, 00:23:40.952 "data_size": 65536 00:23:40.952 }, 00:23:40.952 { 00:23:40.952 "name": "BaseBdev2", 00:23:40.952 "uuid": "83880d58-32b3-4b6a-b4f1-779c580d7142", 00:23:40.952 "is_configured": true, 00:23:40.952 "data_offset": 0, 00:23:40.952 "data_size": 65536 00:23:40.952 }, 00:23:40.952 { 00:23:40.952 "name": "BaseBdev3", 00:23:40.952 "uuid": "01f2cfef-ce83-4371-b48f-142a2edcae74", 00:23:40.952 "is_configured": true, 00:23:40.952 "data_offset": 0, 00:23:40.952 "data_size": 65536 00:23:40.952 }, 00:23:40.952 { 00:23:40.952 "name": "BaseBdev4", 00:23:40.952 "uuid": "7f705937-af47-42ad-855b-f56ad61766b8", 00:23:40.952 "is_configured": true, 00:23:40.952 "data_offset": 0, 00:23:40.952 "data_size": 65536 00:23:40.952 } 00:23:40.952 ] 00:23:40.952 }' 00:23:40.952 05:06:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:41.211 05:06:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:41.211 05:06:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:41.211 05:06:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:41.211 05:06:10 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:41.469 [2024-04-27 05:06:11.197336] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:41.469 [2024-04-27 05:06:11.197698] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:41.469 [2024-04-27 05:06:11.233883] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:41.469 [2024-04-27 05:06:11.236739] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:41.470 05:06:11 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:41.728 [2024-04-27 05:06:11.384862] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:41.728 [2024-04-27 
05:06:11.385971] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:41.728 [2024-04-27 05:06:11.529790] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:41.986 [2024-04-27 05:06:11.842575] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:42.245 [2024-04-27 05:06:12.055713] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:42.245 [2024-04-27 05:06:12.056969] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:42.504 05:06:12 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:42.504 05:06:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:42.504 05:06:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:42.504 05:06:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:42.504 05:06:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:42.504 05:06:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.504 05:06:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.763 [2024-04-27 05:06:12.415716] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:42.763 05:06:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:42.763 "name": "raid_bdev1", 00:23:42.763 "uuid": "62cf86cc-2787-406b-8d0f-03e39a99b3b3", 00:23:42.763 "strip_size_kb": 0, 00:23:42.763 "state": "online", 00:23:42.763 "raid_level": "raid1", 00:23:42.763 "superblock": false, 00:23:42.763 "num_base_bdevs": 4, 00:23:42.763 "num_base_bdevs_discovered": 4, 00:23:42.763 "num_base_bdevs_operational": 4, 00:23:42.763 "process": { 00:23:42.763 "type": "rebuild", 00:23:42.763 "target": "spare", 00:23:42.763 "progress": { 00:23:42.763 "blocks": 14336, 00:23:42.763 "percent": 21 00:23:42.763 } 00:23:42.763 }, 00:23:42.763 "base_bdevs_list": [ 00:23:42.763 { 00:23:42.763 "name": "spare", 00:23:42.763 "uuid": "08d21220-68d6-52e2-8d11-8ee7617fdf84", 00:23:42.763 "is_configured": true, 00:23:42.763 "data_offset": 0, 00:23:42.763 "data_size": 65536 00:23:42.763 }, 00:23:42.763 { 00:23:42.763 "name": "BaseBdev2", 00:23:42.763 "uuid": "83880d58-32b3-4b6a-b4f1-779c580d7142", 00:23:42.763 "is_configured": true, 00:23:42.763 "data_offset": 0, 00:23:42.763 "data_size": 65536 00:23:42.763 }, 00:23:42.763 { 00:23:42.763 "name": "BaseBdev3", 00:23:42.763 "uuid": "01f2cfef-ce83-4371-b48f-142a2edcae74", 00:23:42.763 "is_configured": true, 00:23:42.763 "data_offset": 0, 00:23:42.763 "data_size": 65536 00:23:42.763 }, 00:23:42.763 { 00:23:42.763 "name": "BaseBdev4", 00:23:42.763 "uuid": "7f705937-af47-42ad-855b-f56ad61766b8", 00:23:42.763 "is_configured": true, 00:23:42.763 "data_offset": 0, 00:23:42.763 "data_size": 65536 00:23:42.763 } 00:23:42.763 ] 00:23:42.763 }' 00:23:42.763 05:06:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:42.763 05:06:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:42.763 05:06:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:42.763 05:06:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:42.763 05:06:12 -- 
bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:42.763 05:06:12 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:23:42.763 05:06:12 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:42.763 05:06:12 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:23:42.763 05:06:12 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:42.763 [2024-04-27 05:06:12.640673] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:43.022 [2024-04-27 05:06:12.823261] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:43.280 [2024-04-27 05:06:13.006018] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005a00 00:23:43.280 [2024-04-27 05:06:13.006399] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005c70 00:23:43.280 05:06:13 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:23:43.280 05:06:13 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:23:43.280 05:06:13 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.280 05:06:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:43.280 05:06:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:43.280 05:06:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:43.280 05:06:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:43.280 05:06:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.280 05:06:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.280 [2024-04-27 05:06:13.137083] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:43.538 05:06:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:43.538 "name": "raid_bdev1", 00:23:43.538 "uuid": "62cf86cc-2787-406b-8d0f-03e39a99b3b3", 00:23:43.538 "strip_size_kb": 0, 00:23:43.538 "state": "online", 00:23:43.538 "raid_level": "raid1", 00:23:43.538 "superblock": false, 00:23:43.538 "num_base_bdevs": 4, 00:23:43.538 "num_base_bdevs_discovered": 3, 00:23:43.538 "num_base_bdevs_operational": 3, 00:23:43.538 "process": { 00:23:43.538 "type": "rebuild", 00:23:43.538 "target": "spare", 00:23:43.538 "progress": { 00:23:43.538 "blocks": 20480, 00:23:43.538 "percent": 31 00:23:43.538 } 00:23:43.538 }, 00:23:43.538 "base_bdevs_list": [ 00:23:43.538 { 00:23:43.538 "name": "spare", 00:23:43.538 "uuid": "08d21220-68d6-52e2-8d11-8ee7617fdf84", 00:23:43.538 "is_configured": true, 00:23:43.538 "data_offset": 0, 00:23:43.538 "data_size": 65536 00:23:43.538 }, 00:23:43.538 { 00:23:43.538 "name": null, 00:23:43.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.538 "is_configured": false, 00:23:43.539 "data_offset": 0, 00:23:43.539 "data_size": 65536 00:23:43.539 }, 00:23:43.539 { 00:23:43.539 "name": "BaseBdev3", 00:23:43.539 "uuid": "01f2cfef-ce83-4371-b48f-142a2edcae74", 00:23:43.539 "is_configured": true, 00:23:43.539 "data_offset": 0, 00:23:43.539 "data_size": 65536 00:23:43.539 }, 00:23:43.539 { 00:23:43.539 "name": "BaseBdev4", 00:23:43.539 "uuid": "7f705937-af47-42ad-855b-f56ad61766b8", 00:23:43.539 "is_configured": true, 00:23:43.539 "data_offset": 0, 00:23:43.539 "data_size": 65536 00:23:43.539 } 00:23:43.539 ] 00:23:43.539 }' 00:23:43.539 
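The repeated jq expressions around this point implement the script's process verification: pull the raid bdev's JSON descriptor, then confirm that a rebuild targeting the spare is in flight and read its progress. A compact sketch of that check, assuming the same RPC socket and bdev name as the log:

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used throughout the log
RPC_SOCK=/var/tmp/spdk-raid.sock

# Fetch raid_bdev1's descriptor and confirm a rebuild onto "spare" is running,
# mirroring the '.process.type // "none"' and '.process.target // "none"' checks above.
raid_bdev_info=$("$RPC" -s "$RPC_SOCK" bdev_raid_get_bdevs all |
    jq -r '.[] | select(.name == "raid_bdev1")')

process_type=$(jq -r '.process.type // "none"' <<< "$raid_bdev_info")
process_target=$(jq -r '.process.target // "none"' <<< "$raid_bdev_info")
progress=$(jq -r '.process.progress.percent // 0' <<< "$raid_bdev_info")

[[ $process_type == "rebuild" && $process_target == "spare" ]] || exit 1
echo "rebuild in progress on spare: ${progress}%"
```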
05:06:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:43.539 05:06:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:43.539 05:06:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:43.539 05:06:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:43.539 05:06:13 -- bdev/bdev_raid.sh@657 -- # local timeout=539 00:23:43.539 05:06:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:43.539 05:06:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.539 05:06:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:43.539 05:06:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:43.539 05:06:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:43.539 05:06:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:43.539 05:06:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.539 05:06:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.797 [2024-04-27 05:06:13.496416] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:23:43.797 05:06:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:43.797 "name": "raid_bdev1", 00:23:43.797 "uuid": "62cf86cc-2787-406b-8d0f-03e39a99b3b3", 00:23:43.797 "strip_size_kb": 0, 00:23:43.798 "state": "online", 00:23:43.798 "raid_level": "raid1", 00:23:43.798 "superblock": false, 00:23:43.798 "num_base_bdevs": 4, 00:23:43.798 "num_base_bdevs_discovered": 3, 00:23:43.798 "num_base_bdevs_operational": 3, 00:23:43.798 "process": { 00:23:43.798 "type": "rebuild", 00:23:43.798 "target": "spare", 00:23:43.798 "progress": { 00:23:43.798 "blocks": 28672, 00:23:43.798 "percent": 43 00:23:43.798 } 00:23:43.798 }, 00:23:43.798 "base_bdevs_list": [ 00:23:43.798 { 00:23:43.798 "name": "spare", 00:23:43.798 "uuid": "08d21220-68d6-52e2-8d11-8ee7617fdf84", 00:23:43.798 "is_configured": true, 00:23:43.798 "data_offset": 0, 00:23:43.798 "data_size": 65536 00:23:43.798 }, 00:23:43.798 { 00:23:43.798 "name": null, 00:23:43.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.798 "is_configured": false, 00:23:43.798 "data_offset": 0, 00:23:43.798 "data_size": 65536 00:23:43.798 }, 00:23:43.798 { 00:23:43.798 "name": "BaseBdev3", 00:23:43.798 "uuid": "01f2cfef-ce83-4371-b48f-142a2edcae74", 00:23:43.798 "is_configured": true, 00:23:43.798 "data_offset": 0, 00:23:43.798 "data_size": 65536 00:23:43.798 }, 00:23:43.798 { 00:23:43.798 "name": "BaseBdev4", 00:23:43.798 "uuid": "7f705937-af47-42ad-855b-f56ad61766b8", 00:23:43.798 "is_configured": true, 00:23:43.798 "data_offset": 0, 00:23:43.798 "data_size": 65536 00:23:43.798 } 00:23:43.798 ] 00:23:43.798 }' 00:23:43.798 05:06:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:44.056 05:06:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:44.056 05:06:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:44.056 05:06:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.056 05:06:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:44.622 [2024-04-27 05:06:14.255179] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:23:45.190 05:06:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:45.190 05:06:14 -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:45.190 05:06:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:45.190 05:06:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:45.190 05:06:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:45.190 05:06:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:45.190 05:06:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.190 05:06:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.190 [2024-04-27 05:06:14.861638] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:23:45.190 05:06:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:45.190 "name": "raid_bdev1", 00:23:45.190 "uuid": "62cf86cc-2787-406b-8d0f-03e39a99b3b3", 00:23:45.190 "strip_size_kb": 0, 00:23:45.190 "state": "online", 00:23:45.190 "raid_level": "raid1", 00:23:45.190 "superblock": false, 00:23:45.190 "num_base_bdevs": 4, 00:23:45.190 "num_base_bdevs_discovered": 3, 00:23:45.190 "num_base_bdevs_operational": 3, 00:23:45.190 "process": { 00:23:45.190 "type": "rebuild", 00:23:45.190 "target": "spare", 00:23:45.190 "progress": { 00:23:45.190 "blocks": 51200, 00:23:45.190 "percent": 78 00:23:45.190 } 00:23:45.190 }, 00:23:45.190 "base_bdevs_list": [ 00:23:45.190 { 00:23:45.190 "name": "spare", 00:23:45.190 "uuid": "08d21220-68d6-52e2-8d11-8ee7617fdf84", 00:23:45.190 "is_configured": true, 00:23:45.190 "data_offset": 0, 00:23:45.190 "data_size": 65536 00:23:45.190 }, 00:23:45.190 { 00:23:45.190 "name": null, 00:23:45.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.190 "is_configured": false, 00:23:45.190 "data_offset": 0, 00:23:45.190 "data_size": 65536 00:23:45.190 }, 00:23:45.190 { 00:23:45.190 "name": "BaseBdev3", 00:23:45.190 "uuid": "01f2cfef-ce83-4371-b48f-142a2edcae74", 00:23:45.190 "is_configured": true, 00:23:45.190 "data_offset": 0, 00:23:45.190 "data_size": 65536 00:23:45.190 }, 00:23:45.190 { 00:23:45.190 "name": "BaseBdev4", 00:23:45.190 "uuid": "7f705937-af47-42ad-855b-f56ad61766b8", 00:23:45.190 "is_configured": true, 00:23:45.190 "data_offset": 0, 00:23:45.190 "data_size": 65536 00:23:45.190 } 00:23:45.190 ] 00:23:45.190 }' 00:23:45.190 05:06:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:45.449 [2024-04-27 05:06:15.100335] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:23:45.449 05:06:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:45.449 05:06:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:45.449 05:06:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:45.449 05:06:15 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:45.449 [2024-04-27 05:06:15.337639] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:23:45.708 [2024-04-27 05:06:15.561911] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:23:46.278 [2024-04-27 05:06:15.915965] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:46.278 [2024-04-27 05:06:16.023590] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:46.278 [2024-04-27 05:06:16.027292] 
bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:46.278 05:06:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:46.278 05:06:16 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:46.278 05:06:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:46.278 05:06:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:46.278 05:06:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:46.278 05:06:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:46.278 05:06:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.278 05:06:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.537 05:06:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:46.537 "name": "raid_bdev1", 00:23:46.537 "uuid": "62cf86cc-2787-406b-8d0f-03e39a99b3b3", 00:23:46.537 "strip_size_kb": 0, 00:23:46.537 "state": "online", 00:23:46.537 "raid_level": "raid1", 00:23:46.537 "superblock": false, 00:23:46.537 "num_base_bdevs": 4, 00:23:46.537 "num_base_bdevs_discovered": 3, 00:23:46.537 "num_base_bdevs_operational": 3, 00:23:46.537 "base_bdevs_list": [ 00:23:46.537 { 00:23:46.537 "name": "spare", 00:23:46.537 "uuid": "08d21220-68d6-52e2-8d11-8ee7617fdf84", 00:23:46.537 "is_configured": true, 00:23:46.537 "data_offset": 0, 00:23:46.537 "data_size": 65536 00:23:46.537 }, 00:23:46.537 { 00:23:46.537 "name": null, 00:23:46.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.537 "is_configured": false, 00:23:46.537 "data_offset": 0, 00:23:46.537 "data_size": 65536 00:23:46.537 }, 00:23:46.537 { 00:23:46.538 "name": "BaseBdev3", 00:23:46.538 "uuid": "01f2cfef-ce83-4371-b48f-142a2edcae74", 00:23:46.538 "is_configured": true, 00:23:46.538 "data_offset": 0, 00:23:46.538 "data_size": 65536 00:23:46.538 }, 00:23:46.538 { 00:23:46.538 "name": "BaseBdev4", 00:23:46.538 "uuid": "7f705937-af47-42ad-855b-f56ad61766b8", 00:23:46.538 "is_configured": true, 00:23:46.538 "data_offset": 0, 00:23:46.538 "data_size": 65536 00:23:46.538 } 00:23:46.538 ] 00:23:46.538 }' 00:23:46.538 05:06:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:46.796 05:06:16 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:46.797 05:06:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:46.797 05:06:16 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:46.797 05:06:16 -- bdev/bdev_raid.sh@660 -- # break 00:23:46.797 05:06:16 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:46.797 05:06:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:46.797 05:06:16 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:46.797 05:06:16 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:46.797 05:06:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:46.797 05:06:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.797 05:06:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.055 05:06:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:47.055 "name": "raid_bdev1", 00:23:47.055 "uuid": "62cf86cc-2787-406b-8d0f-03e39a99b3b3", 00:23:47.055 "strip_size_kb": 0, 00:23:47.055 "state": "online", 00:23:47.055 "raid_level": "raid1", 00:23:47.055 "superblock": false, 00:23:47.055 "num_base_bdevs": 4, 00:23:47.055 
"num_base_bdevs_discovered": 3, 00:23:47.055 "num_base_bdevs_operational": 3, 00:23:47.055 "base_bdevs_list": [ 00:23:47.055 { 00:23:47.055 "name": "spare", 00:23:47.055 "uuid": "08d21220-68d6-52e2-8d11-8ee7617fdf84", 00:23:47.055 "is_configured": true, 00:23:47.055 "data_offset": 0, 00:23:47.055 "data_size": 65536 00:23:47.055 }, 00:23:47.055 { 00:23:47.055 "name": null, 00:23:47.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.055 "is_configured": false, 00:23:47.055 "data_offset": 0, 00:23:47.055 "data_size": 65536 00:23:47.055 }, 00:23:47.055 { 00:23:47.055 "name": "BaseBdev3", 00:23:47.055 "uuid": "01f2cfef-ce83-4371-b48f-142a2edcae74", 00:23:47.055 "is_configured": true, 00:23:47.055 "data_offset": 0, 00:23:47.055 "data_size": 65536 00:23:47.055 }, 00:23:47.055 { 00:23:47.055 "name": "BaseBdev4", 00:23:47.055 "uuid": "7f705937-af47-42ad-855b-f56ad61766b8", 00:23:47.055 "is_configured": true, 00:23:47.055 "data_offset": 0, 00:23:47.055 "data_size": 65536 00:23:47.055 } 00:23:47.055 ] 00:23:47.055 }' 00:23:47.055 05:06:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:47.055 05:06:16 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:47.055 05:06:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:47.055 05:06:16 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:47.055 05:06:16 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:47.055 05:06:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:47.055 05:06:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:47.055 05:06:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:47.055 05:06:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:47.055 05:06:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:47.055 05:06:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:47.055 05:06:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:47.055 05:06:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:47.055 05:06:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:47.056 05:06:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.056 05:06:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.314 05:06:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:47.314 "name": "raid_bdev1", 00:23:47.314 "uuid": "62cf86cc-2787-406b-8d0f-03e39a99b3b3", 00:23:47.314 "strip_size_kb": 0, 00:23:47.314 "state": "online", 00:23:47.314 "raid_level": "raid1", 00:23:47.314 "superblock": false, 00:23:47.314 "num_base_bdevs": 4, 00:23:47.314 "num_base_bdevs_discovered": 3, 00:23:47.314 "num_base_bdevs_operational": 3, 00:23:47.314 "base_bdevs_list": [ 00:23:47.314 { 00:23:47.314 "name": "spare", 00:23:47.314 "uuid": "08d21220-68d6-52e2-8d11-8ee7617fdf84", 00:23:47.314 "is_configured": true, 00:23:47.314 "data_offset": 0, 00:23:47.314 "data_size": 65536 00:23:47.314 }, 00:23:47.314 { 00:23:47.314 "name": null, 00:23:47.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.314 "is_configured": false, 00:23:47.314 "data_offset": 0, 00:23:47.314 "data_size": 65536 00:23:47.314 }, 00:23:47.314 { 00:23:47.314 "name": "BaseBdev3", 00:23:47.314 "uuid": "01f2cfef-ce83-4371-b48f-142a2edcae74", 00:23:47.314 "is_configured": true, 00:23:47.314 "data_offset": 0, 00:23:47.314 "data_size": 65536 00:23:47.314 }, 00:23:47.314 { 00:23:47.314 "name": 
"BaseBdev4", 00:23:47.314 "uuid": "7f705937-af47-42ad-855b-f56ad61766b8", 00:23:47.314 "is_configured": true, 00:23:47.314 "data_offset": 0, 00:23:47.314 "data_size": 65536 00:23:47.314 } 00:23:47.314 ] 00:23:47.314 }' 00:23:47.314 05:06:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:47.314 05:06:17 -- common/autotest_common.sh@10 -- # set +x 00:23:47.882 05:06:17 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:48.140 [2024-04-27 05:06:18.048983] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:48.140 [2024-04-27 05:06:18.049287] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:48.398 00:23:48.398 Latency(us) 00:23:48.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.398 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:48.398 raid_bdev1 : 11.81 91.88 275.65 0.00 0.00 15410.15 323.96 121539.49 00:23:48.398 =================================================================================================================== 00:23:48.398 Total : 91.88 275.65 0.00 0.00 15410.15 323.96 121539.49 00:23:48.398 [2024-04-27 05:06:18.138691] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:48.398 0 00:23:48.398 [2024-04-27 05:06:18.138919] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:48.398 [2024-04-27 05:06:18.139181] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:48.398 [2024-04-27 05:06:18.139300] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:23:48.398 05:06:18 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.398 05:06:18 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:48.657 05:06:18 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:48.657 05:06:18 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:23:48.657 05:06:18 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:23:48.657 05:06:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:48.657 05:06:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:23:48.657 05:06:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:48.657 05:06:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:48.657 05:06:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:48.657 05:06:18 -- bdev/nbd_common.sh@12 -- # local i 00:23:48.657 05:06:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:48.657 05:06:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:48.657 05:06:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:23:48.917 /dev/nbd0 00:23:48.917 05:06:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:48.917 05:06:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:48.917 05:06:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:48.917 05:06:18 -- common/autotest_common.sh@857 -- # local i 00:23:48.917 05:06:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:48.917 05:06:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:48.917 05:06:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:48.917 05:06:18 
-- common/autotest_common.sh@861 -- # break 00:23:48.917 05:06:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:48.917 05:06:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:48.917 05:06:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:48.917 1+0 records in 00:23:48.917 1+0 records out 00:23:48.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811802 s, 5.0 MB/s 00:23:48.917 05:06:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:48.917 05:06:18 -- common/autotest_common.sh@874 -- # size=4096 00:23:48.917 05:06:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:48.917 05:06:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:48.917 05:06:18 -- common/autotest_common.sh@877 -- # return 0 00:23:48.917 05:06:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:48.917 05:06:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:48.917 05:06:18 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:48.917 05:06:18 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:23:48.917 05:06:18 -- bdev/bdev_raid.sh@678 -- # continue 00:23:48.917 05:06:18 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:48.917 05:06:18 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:23:48.917 05:06:18 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:23:48.917 05:06:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:48.917 05:06:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:23:48.917 05:06:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:48.917 05:06:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:48.917 05:06:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:48.917 05:06:18 -- bdev/nbd_common.sh@12 -- # local i 00:23:48.917 05:06:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:48.917 05:06:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:48.917 05:06:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:23:49.176 /dev/nbd1 00:23:49.176 05:06:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:49.176 05:06:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:49.176 05:06:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:49.176 05:06:19 -- common/autotest_common.sh@857 -- # local i 00:23:49.176 05:06:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:49.176 05:06:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:49.176 05:06:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:49.176 05:06:19 -- common/autotest_common.sh@861 -- # break 00:23:49.176 05:06:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:49.176 05:06:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:49.176 05:06:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:49.176 1+0 records in 00:23:49.176 1+0 records out 00:23:49.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442459 s, 9.3 MB/s 00:23:49.176 05:06:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:49.176 05:06:19 -- common/autotest_common.sh@874 -- # size=4096 00:23:49.176 05:06:19 -- common/autotest_common.sh@875 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:49.176 05:06:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:49.176 05:06:19 -- common/autotest_common.sh@877 -- # return 0 00:23:49.176 05:06:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:49.176 05:06:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:49.176 05:06:19 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:49.435 05:06:19 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:49.435 05:06:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:49.435 05:06:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:49.435 05:06:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:49.435 05:06:19 -- bdev/nbd_common.sh@51 -- # local i 00:23:49.435 05:06:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:49.435 05:06:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:49.695 05:06:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:49.695 05:06:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:49.695 05:06:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:49.695 05:06:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:49.695 05:06:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:49.695 05:06:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:49.695 05:06:19 -- bdev/nbd_common.sh@41 -- # break 00:23:49.695 05:06:19 -- bdev/nbd_common.sh@45 -- # return 0 00:23:49.695 05:06:19 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:49.695 05:06:19 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:23:49.695 05:06:19 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:23:49.695 05:06:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:49.695 05:06:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:23:49.695 05:06:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:49.695 05:06:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:49.695 05:06:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:49.695 05:06:19 -- bdev/nbd_common.sh@12 -- # local i 00:23:49.695 05:06:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:49.695 05:06:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:49.695 05:06:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:23:50.010 /dev/nbd1 00:23:50.010 05:06:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:50.010 05:06:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:50.010 05:06:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:50.010 05:06:19 -- common/autotest_common.sh@857 -- # local i 00:23:50.010 05:06:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:50.010 05:06:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:50.010 05:06:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:50.010 05:06:19 -- common/autotest_common.sh@861 -- # break 00:23:50.010 05:06:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:50.010 05:06:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:50.010 05:06:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:50.010 1+0 records in 00:23:50.010 1+0 records out 
00:23:50.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111305 s, 3.7 MB/s 00:23:50.010 05:06:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:50.010 05:06:19 -- common/autotest_common.sh@874 -- # size=4096 00:23:50.010 05:06:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:50.010 05:06:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:50.010 05:06:19 -- common/autotest_common.sh@877 -- # return 0 00:23:50.010 05:06:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:50.010 05:06:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:50.010 05:06:19 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:50.010 05:06:19 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:50.010 05:06:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:50.010 05:06:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:50.011 05:06:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:50.011 05:06:19 -- bdev/nbd_common.sh@51 -- # local i 00:23:50.011 05:06:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:50.011 05:06:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:50.269 05:06:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:50.269 05:06:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:50.269 05:06:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:50.269 05:06:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:50.269 05:06:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:50.269 05:06:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:50.269 05:06:20 -- bdev/nbd_common.sh@41 -- # break 00:23:50.269 05:06:20 -- bdev/nbd_common.sh@45 -- # return 0 00:23:50.269 05:06:20 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:50.269 05:06:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:50.269 05:06:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:50.269 05:06:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:50.269 05:06:20 -- bdev/nbd_common.sh@51 -- # local i 00:23:50.269 05:06:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:50.269 05:06:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:50.529 05:06:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:50.529 05:06:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:50.529 05:06:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:50.529 05:06:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:50.529 05:06:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:50.529 05:06:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:50.529 05:06:20 -- bdev/nbd_common.sh@41 -- # break 00:23:50.529 05:06:20 -- bdev/nbd_common.sh@45 -- # return 0 00:23:50.529 05:06:20 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:50.529 05:06:20 -- bdev/bdev_raid.sh@709 -- # killprocess 138183 00:23:50.529 05:06:20 -- common/autotest_common.sh@926 -- # '[' -z 138183 ']' 00:23:50.529 05:06:20 -- common/autotest_common.sh@930 -- # kill -0 138183 00:23:50.529 05:06:20 -- common/autotest_common.sh@931 -- # uname 00:23:50.529 05:06:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:50.529 05:06:20 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 138183 00:23:50.529 killing process with pid 138183 00:23:50.529 Received shutdown signal, test time was about 14.041634 seconds 00:23:50.529 00:23:50.529 Latency(us) 00:23:50.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.529 =================================================================================================================== 00:23:50.529 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:50.529 05:06:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:50.529 05:06:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:50.529 05:06:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 138183' 00:23:50.529 05:06:20 -- common/autotest_common.sh@945 -- # kill 138183 00:23:50.529 05:06:20 -- common/autotest_common.sh@950 -- # wait 138183 00:23:50.529 [2024-04-27 05:06:20.366271] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:50.788 [2024-04-27 05:06:20.446416] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:51.047 ************************************ 00:23:51.047 END TEST raid_rebuild_test_io 00:23:51.047 ************************************ 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:51.047 00:23:51.047 real 0m19.208s 00:23:51.047 user 0m30.530s 00:23:51.047 sys 0m2.566s 00:23:51.047 05:06:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:51.047 05:06:20 -- common/autotest_common.sh@10 -- # set +x 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:23:51.047 05:06:20 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:51.047 05:06:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:51.047 05:06:20 -- common/autotest_common.sh@10 -- # set +x 00:23:51.047 ************************************ 00:23:51.047 START TEST raid_rebuild_test_sb_io 00:23:51.047 ************************************ 00:23:51.047 05:06:20 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true true 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:51.047 05:06:20 -- 
bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@544 -- # raid_pid=138694 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:51.047 05:06:20 -- bdev/bdev_raid.sh@545 -- # waitforlisten 138694 /var/tmp/spdk-raid.sock 00:23:51.047 05:06:20 -- common/autotest_common.sh@819 -- # '[' -z 138694 ']' 00:23:51.047 05:06:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:51.047 05:06:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:51.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:51.047 05:06:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:51.047 05:06:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:51.047 05:06:20 -- common/autotest_common.sh@10 -- # set +x 00:23:51.047 [2024-04-27 05:06:20.933854] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:51.047 [2024-04-27 05:06:20.934729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138694 ] 00:23:51.047 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:51.047 Zero copy mechanism will not be used. 
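At this point the sb_io run has launched bdevperf idle (-z) on the raid test socket and is about to build the base bdevs over RPC. A minimal sketch of the same fixture driven by hand, using only the binary, socket path, and RPC calls that appear in this trace (the ordering and the backgrounding with & are assumptions, not part of the captured run):

    # start the I/O generator idle, listening on the raid test socket (flags copied from the trace above)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    # back each BaseBdevN with a malloc bdev wrapped in a passthru bdev, as the harness does below
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
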
00:23:51.306 [2024-04-27 05:06:21.105046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.565 [2024-04-27 05:06:21.224253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.565 [2024-04-27 05:06:21.300769] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:52.132 05:06:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:52.132 05:06:21 -- common/autotest_common.sh@852 -- # return 0 00:23:52.132 05:06:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:52.132 05:06:21 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:52.132 05:06:21 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:52.391 BaseBdev1_malloc 00:23:52.391 05:06:22 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:52.649 [2024-04-27 05:06:22.429299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:52.649 [2024-04-27 05:06:22.429455] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:52.649 [2024-04-27 05:06:22.429514] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:23:52.649 [2024-04-27 05:06:22.429588] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:52.649 [2024-04-27 05:06:22.432589] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:52.649 [2024-04-27 05:06:22.432661] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:52.649 BaseBdev1 00:23:52.649 05:06:22 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:52.649 05:06:22 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:52.649 05:06:22 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:52.908 BaseBdev2_malloc 00:23:52.908 05:06:22 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:53.175 [2024-04-27 05:06:22.952218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:53.175 [2024-04-27 05:06:22.952345] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:53.175 [2024-04-27 05:06:22.952405] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:53.175 [2024-04-27 05:06:22.952472] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:53.175 [2024-04-27 05:06:22.955259] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:53.175 [2024-04-27 05:06:22.955317] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:53.175 BaseBdev2 00:23:53.175 05:06:22 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:53.175 05:06:22 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:53.175 05:06:22 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:53.435 BaseBdev3_malloc 00:23:53.435 05:06:23 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:23:53.693 [2024-04-27 05:06:23.463668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:53.693 [2024-04-27 05:06:23.463797] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:53.693 [2024-04-27 05:06:23.463853] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:23:53.693 [2024-04-27 05:06:23.463908] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:53.693 [2024-04-27 05:06:23.466728] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:53.693 [2024-04-27 05:06:23.466797] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:53.693 BaseBdev3 00:23:53.693 05:06:23 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:53.693 05:06:23 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:53.693 05:06:23 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:53.951 BaseBdev4_malloc 00:23:53.951 05:06:23 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:54.209 [2024-04-27 05:06:24.010675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:54.209 [2024-04-27 05:06:24.010815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:54.209 [2024-04-27 05:06:24.010865] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:23:54.209 [2024-04-27 05:06:24.010934] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:54.209 [2024-04-27 05:06:24.014175] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:54.209 [2024-04-27 05:06:24.014241] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:54.209 BaseBdev4 00:23:54.209 05:06:24 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:54.467 spare_malloc 00:23:54.467 05:06:24 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:54.726 spare_delay 00:23:54.726 05:06:24 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:54.984 [2024-04-27 05:06:24.771033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:54.984 [2024-04-27 05:06:24.771170] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:54.984 [2024-04-27 05:06:24.771219] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:54.984 [2024-04-27 05:06:24.771289] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:54.984 [2024-04-27 05:06:24.774248] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:54.984 [2024-04-27 05:06:24.774321] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:54.984 spare 00:23:54.984 05:06:24 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:55.242 [2024-04-27 05:06:25.015357] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:55.243 [2024-04-27 05:06:25.017936] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:55.243 [2024-04-27 05:06:25.018053] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:55.243 [2024-04-27 05:06:25.018122] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:55.243 [2024-04-27 05:06:25.018407] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:23:55.243 [2024-04-27 05:06:25.018433] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:55.243 [2024-04-27 05:06:25.018630] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:55.243 [2024-04-27 05:06:25.019140] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:23:55.243 [2024-04-27 05:06:25.019165] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:23:55.243 [2024-04-27 05:06:25.019409] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:55.243 05:06:25 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:55.243 05:06:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:55.243 05:06:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:55.243 05:06:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:55.243 05:06:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:55.243 05:06:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:55.243 05:06:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:55.243 05:06:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:55.243 05:06:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:55.243 05:06:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:55.243 05:06:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.243 05:06:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.501 05:06:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:55.501 "name": "raid_bdev1", 00:23:55.501 "uuid": "e4e3c449-0f61-4df6-a6a1-480fe2a352da", 00:23:55.501 "strip_size_kb": 0, 00:23:55.501 "state": "online", 00:23:55.501 "raid_level": "raid1", 00:23:55.501 "superblock": true, 00:23:55.501 "num_base_bdevs": 4, 00:23:55.501 "num_base_bdevs_discovered": 4, 00:23:55.501 "num_base_bdevs_operational": 4, 00:23:55.501 "base_bdevs_list": [ 00:23:55.501 { 00:23:55.501 "name": "BaseBdev1", 00:23:55.501 "uuid": "e9e73b9e-0b61-5e9a-af90-e49c8fdbe502", 00:23:55.501 "is_configured": true, 00:23:55.501 "data_offset": 2048, 00:23:55.501 "data_size": 63488 00:23:55.501 }, 00:23:55.501 { 00:23:55.501 "name": "BaseBdev2", 00:23:55.501 "uuid": "6db5b5c6-323b-5de9-870e-33e67b264c97", 00:23:55.501 "is_configured": true, 00:23:55.501 "data_offset": 2048, 00:23:55.501 "data_size": 63488 00:23:55.501 }, 00:23:55.501 { 00:23:55.501 "name": "BaseBdev3", 00:23:55.501 "uuid": "ee25923a-1f4a-5fd7-bd0b-ac2d2c01ecf1", 00:23:55.501 "is_configured": true, 00:23:55.501 "data_offset": 2048, 00:23:55.501 "data_size": 63488 00:23:55.501 }, 00:23:55.501 
{ 00:23:55.501 "name": "BaseBdev4", 00:23:55.501 "uuid": "166b20b1-f14c-5e55-841a-1dbf962895a1", 00:23:55.501 "is_configured": true, 00:23:55.501 "data_offset": 2048, 00:23:55.501 "data_size": 63488 00:23:55.501 } 00:23:55.501 ] 00:23:55.501 }' 00:23:55.501 05:06:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:55.501 05:06:25 -- common/autotest_common.sh@10 -- # set +x 00:23:56.068 05:06:25 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:56.068 05:06:25 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:56.329 [2024-04-27 05:06:26.164029] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:56.329 05:06:26 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:23:56.329 05:06:26 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.329 05:06:26 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:56.593 05:06:26 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:56.593 05:06:26 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:23:56.593 05:06:26 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:56.593 05:06:26 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:56.852 [2024-04-27 05:06:26.555286] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:56.852 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:56.852 Zero copy mechanism will not be used. 00:23:56.852 Running I/O for 60 seconds... 
00:23:56.852 [2024-04-27 05:06:26.700213] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:56.852 [2024-04-27 05:06:26.708196] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:23:56.852 05:06:26 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:56.852 05:06:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:56.852 05:06:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:56.852 05:06:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:56.852 05:06:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:56.852 05:06:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:56.852 05:06:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:56.852 05:06:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:56.852 05:06:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:56.852 05:06:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:56.852 05:06:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.852 05:06:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.110 05:06:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:57.110 "name": "raid_bdev1", 00:23:57.110 "uuid": "e4e3c449-0f61-4df6-a6a1-480fe2a352da", 00:23:57.110 "strip_size_kb": 0, 00:23:57.110 "state": "online", 00:23:57.110 "raid_level": "raid1", 00:23:57.110 "superblock": true, 00:23:57.110 "num_base_bdevs": 4, 00:23:57.110 "num_base_bdevs_discovered": 3, 00:23:57.110 "num_base_bdevs_operational": 3, 00:23:57.110 "base_bdevs_list": [ 00:23:57.110 { 00:23:57.110 "name": null, 00:23:57.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.110 "is_configured": false, 00:23:57.110 "data_offset": 2048, 00:23:57.110 "data_size": 63488 00:23:57.110 }, 00:23:57.110 { 00:23:57.110 "name": "BaseBdev2", 00:23:57.110 "uuid": "6db5b5c6-323b-5de9-870e-33e67b264c97", 00:23:57.110 "is_configured": true, 00:23:57.110 "data_offset": 2048, 00:23:57.110 "data_size": 63488 00:23:57.110 }, 00:23:57.110 { 00:23:57.110 "name": "BaseBdev3", 00:23:57.110 "uuid": "ee25923a-1f4a-5fd7-bd0b-ac2d2c01ecf1", 00:23:57.110 "is_configured": true, 00:23:57.110 "data_offset": 2048, 00:23:57.110 "data_size": 63488 00:23:57.110 }, 00:23:57.110 { 00:23:57.110 "name": "BaseBdev4", 00:23:57.110 "uuid": "166b20b1-f14c-5e55-841a-1dbf962895a1", 00:23:57.110 "is_configured": true, 00:23:57.110 "data_offset": 2048, 00:23:57.110 "data_size": 63488 00:23:57.110 } 00:23:57.110 ] 00:23:57.110 }' 00:23:57.110 05:06:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:57.110 05:06:27 -- common/autotest_common.sh@10 -- # set +x 00:23:58.045 05:06:27 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:58.045 [2024-04-27 05:06:27.952397] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:58.045 [2024-04-27 05:06:27.952505] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:58.304 05:06:28 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:58.304 [2024-04-27 05:06:28.041971] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:58.304 [2024-04-27 05:06:28.044686] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:58.304 
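With the spare re-attached via bdev_raid_add_base_bdev, the rebuild has started and the harness now polls verify_raid_bdev_process with a one-second sleep between iterations. The check reduces to the two jq filters that recur throughout this log; a sketch of one polling step (helper name, socket path, and filters from the trace, the standalone loop framing is an assumption):

    # one verify_raid_bdev_process iteration: fetch raid_bdev1 info and test the process fields
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]]   # rebuild still in progress?
    [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]]   # rebuilding onto the spare?
    # once both report "none", the harness breaks out and verifies the final online raid1 state
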
[2024-04-27 05:06:28.167079] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:58.304 [2024-04-27 05:06:28.167839] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:58.562 [2024-04-27 05:06:28.312540] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:58.562 [2024-04-27 05:06:28.313017] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:58.820 [2024-04-27 05:06:28.667140] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:58.820 [2024-04-27 05:06:28.668753] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:59.079 [2024-04-27 05:06:28.884833] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:59.079 [2024-04-27 05:06:28.885431] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:59.337 05:06:29 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:59.337 05:06:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:59.337 05:06:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:59.337 05:06:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:59.337 05:06:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:59.337 05:06:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.337 05:06:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.337 [2024-04-27 05:06:29.115033] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:59.337 [2024-04-27 05:06:29.115853] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:59.596 [2024-04-27 05:06:29.248380] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:59.596 [2024-04-27 05:06:29.248845] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:59.596 05:06:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:59.596 "name": "raid_bdev1", 00:23:59.596 "uuid": "e4e3c449-0f61-4df6-a6a1-480fe2a352da", 00:23:59.596 "strip_size_kb": 0, 00:23:59.596 "state": "online", 00:23:59.596 "raid_level": "raid1", 00:23:59.596 "superblock": true, 00:23:59.596 "num_base_bdevs": 4, 00:23:59.596 "num_base_bdevs_discovered": 4, 00:23:59.596 "num_base_bdevs_operational": 4, 00:23:59.596 "process": { 00:23:59.596 "type": "rebuild", 00:23:59.596 "target": "spare", 00:23:59.596 "progress": { 00:23:59.596 "blocks": 16384, 00:23:59.596 "percent": 25 00:23:59.596 } 00:23:59.596 }, 00:23:59.596 "base_bdevs_list": [ 00:23:59.596 { 00:23:59.596 "name": "spare", 00:23:59.596 "uuid": "6fd01339-fdc5-539a-b734-d5c3d6eac2bd", 00:23:59.596 "is_configured": true, 00:23:59.596 "data_offset": 2048, 00:23:59.596 "data_size": 63488 00:23:59.596 }, 00:23:59.596 { 00:23:59.596 "name": "BaseBdev2", 00:23:59.596 "uuid": 
"6db5b5c6-323b-5de9-870e-33e67b264c97", 00:23:59.596 "is_configured": true, 00:23:59.596 "data_offset": 2048, 00:23:59.596 "data_size": 63488 00:23:59.596 }, 00:23:59.596 { 00:23:59.596 "name": "BaseBdev3", 00:23:59.596 "uuid": "ee25923a-1f4a-5fd7-bd0b-ac2d2c01ecf1", 00:23:59.596 "is_configured": true, 00:23:59.596 "data_offset": 2048, 00:23:59.596 "data_size": 63488 00:23:59.596 }, 00:23:59.596 { 00:23:59.596 "name": "BaseBdev4", 00:23:59.596 "uuid": "166b20b1-f14c-5e55-841a-1dbf962895a1", 00:23:59.596 "is_configured": true, 00:23:59.596 "data_offset": 2048, 00:23:59.596 "data_size": 63488 00:23:59.596 } 00:23:59.596 ] 00:23:59.596 }' 00:23:59.596 05:06:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:59.596 05:06:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:59.596 05:06:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:59.596 05:06:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:59.596 05:06:29 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:59.854 [2024-04-27 05:06:29.596211] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:59.854 [2024-04-27 05:06:29.649165] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:59.854 [2024-04-27 05:06:29.712651] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:00.112 [2024-04-27 05:06:29.812691] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:00.112 [2024-04-27 05:06:29.828812] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:00.113 [2024-04-27 05:06:29.877232] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:24:00.113 05:06:29 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:00.113 05:06:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:00.113 05:06:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:00.113 05:06:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:00.113 05:06:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:00.113 05:06:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:00.113 05:06:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:00.113 05:06:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:00.113 05:06:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:00.113 05:06:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:00.113 05:06:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.113 05:06:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.371 05:06:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:00.371 "name": "raid_bdev1", 00:24:00.371 "uuid": "e4e3c449-0f61-4df6-a6a1-480fe2a352da", 00:24:00.371 "strip_size_kb": 0, 00:24:00.371 "state": "online", 00:24:00.371 "raid_level": "raid1", 00:24:00.371 "superblock": true, 00:24:00.371 "num_base_bdevs": 4, 00:24:00.371 "num_base_bdevs_discovered": 3, 00:24:00.371 "num_base_bdevs_operational": 3, 00:24:00.371 "base_bdevs_list": [ 00:24:00.371 { 00:24:00.371 "name": null, 00:24:00.371 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:24:00.371 "is_configured": false, 00:24:00.371 "data_offset": 2048, 00:24:00.371 "data_size": 63488 00:24:00.371 }, 00:24:00.371 { 00:24:00.371 "name": "BaseBdev2", 00:24:00.371 "uuid": "6db5b5c6-323b-5de9-870e-33e67b264c97", 00:24:00.371 "is_configured": true, 00:24:00.371 "data_offset": 2048, 00:24:00.371 "data_size": 63488 00:24:00.371 }, 00:24:00.371 { 00:24:00.371 "name": "BaseBdev3", 00:24:00.371 "uuid": "ee25923a-1f4a-5fd7-bd0b-ac2d2c01ecf1", 00:24:00.371 "is_configured": true, 00:24:00.371 "data_offset": 2048, 00:24:00.371 "data_size": 63488 00:24:00.371 }, 00:24:00.371 { 00:24:00.371 "name": "BaseBdev4", 00:24:00.371 "uuid": "166b20b1-f14c-5e55-841a-1dbf962895a1", 00:24:00.371 "is_configured": true, 00:24:00.371 "data_offset": 2048, 00:24:00.371 "data_size": 63488 00:24:00.371 } 00:24:00.371 ] 00:24:00.371 }' 00:24:00.371 05:06:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:00.371 05:06:30 -- common/autotest_common.sh@10 -- # set +x 00:24:01.306 05:06:30 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:01.306 05:06:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:01.306 05:06:30 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:01.306 05:06:30 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:01.306 05:06:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:01.306 05:06:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.306 05:06:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.306 05:06:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:01.306 "name": "raid_bdev1", 00:24:01.306 "uuid": "e4e3c449-0f61-4df6-a6a1-480fe2a352da", 00:24:01.306 "strip_size_kb": 0, 00:24:01.306 "state": "online", 00:24:01.306 "raid_level": "raid1", 00:24:01.306 "superblock": true, 00:24:01.306 "num_base_bdevs": 4, 00:24:01.306 "num_base_bdevs_discovered": 3, 00:24:01.306 "num_base_bdevs_operational": 3, 00:24:01.306 "base_bdevs_list": [ 00:24:01.306 { 00:24:01.306 "name": null, 00:24:01.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.306 "is_configured": false, 00:24:01.306 "data_offset": 2048, 00:24:01.306 "data_size": 63488 00:24:01.306 }, 00:24:01.306 { 00:24:01.306 "name": "BaseBdev2", 00:24:01.306 "uuid": "6db5b5c6-323b-5de9-870e-33e67b264c97", 00:24:01.306 "is_configured": true, 00:24:01.306 "data_offset": 2048, 00:24:01.306 "data_size": 63488 00:24:01.306 }, 00:24:01.306 { 00:24:01.306 "name": "BaseBdev3", 00:24:01.306 "uuid": "ee25923a-1f4a-5fd7-bd0b-ac2d2c01ecf1", 00:24:01.306 "is_configured": true, 00:24:01.306 "data_offset": 2048, 00:24:01.306 "data_size": 63488 00:24:01.306 }, 00:24:01.306 { 00:24:01.306 "name": "BaseBdev4", 00:24:01.306 "uuid": "166b20b1-f14c-5e55-841a-1dbf962895a1", 00:24:01.306 "is_configured": true, 00:24:01.306 "data_offset": 2048, 00:24:01.306 "data_size": 63488 00:24:01.306 } 00:24:01.306 ] 00:24:01.306 }' 00:24:01.306 05:06:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:01.306 05:06:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:01.306 05:06:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:01.563 05:06:31 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:01.563 05:06:31 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:01.821 [2024-04-27 
05:06:31.544722] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:01.822 [2024-04-27 05:06:31.544802] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:01.822 05:06:31 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:01.822 [2024-04-27 05:06:31.616814] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:01.822 [2024-04-27 05:06:31.619428] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:02.080 [2024-04-27 05:06:31.739240] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:02.080 [2024-04-27 05:06:31.741095] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:02.080 [2024-04-27 05:06:31.951852] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:02.080 [2024-04-27 05:06:31.952347] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:03.048 05:06:32 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:03.048 05:06:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:03.048 05:06:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:03.048 05:06:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:03.048 05:06:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:03.048 05:06:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.048 05:06:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.048 [2024-04-27 05:06:32.732193] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:03.048 [2024-04-27 05:06:32.866758] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:03.048 05:06:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:03.048 "name": "raid_bdev1", 00:24:03.048 "uuid": "e4e3c449-0f61-4df6-a6a1-480fe2a352da", 00:24:03.048 "strip_size_kb": 0, 00:24:03.048 "state": "online", 00:24:03.048 "raid_level": "raid1", 00:24:03.048 "superblock": true, 00:24:03.048 "num_base_bdevs": 4, 00:24:03.048 "num_base_bdevs_discovered": 4, 00:24:03.048 "num_base_bdevs_operational": 4, 00:24:03.048 "process": { 00:24:03.048 "type": "rebuild", 00:24:03.048 "target": "spare", 00:24:03.048 "progress": { 00:24:03.048 "blocks": 14336, 00:24:03.048 "percent": 22 00:24:03.048 } 00:24:03.048 }, 00:24:03.048 "base_bdevs_list": [ 00:24:03.048 { 00:24:03.048 "name": "spare", 00:24:03.048 "uuid": "6fd01339-fdc5-539a-b734-d5c3d6eac2bd", 00:24:03.048 "is_configured": true, 00:24:03.048 "data_offset": 2048, 00:24:03.048 "data_size": 63488 00:24:03.048 }, 00:24:03.048 { 00:24:03.048 "name": "BaseBdev2", 00:24:03.048 "uuid": "6db5b5c6-323b-5de9-870e-33e67b264c97", 00:24:03.048 "is_configured": true, 00:24:03.048 "data_offset": 2048, 00:24:03.048 "data_size": 63488 00:24:03.048 }, 00:24:03.048 { 00:24:03.048 "name": "BaseBdev3", 00:24:03.048 "uuid": "ee25923a-1f4a-5fd7-bd0b-ac2d2c01ecf1", 00:24:03.048 "is_configured": true, 00:24:03.048 "data_offset": 2048, 00:24:03.048 "data_size": 63488 00:24:03.048 }, 00:24:03.048 { 00:24:03.048 "name": "BaseBdev4", 
00:24:03.048 "uuid": "166b20b1-f14c-5e55-841a-1dbf962895a1", 00:24:03.048 "is_configured": true, 00:24:03.048 "data_offset": 2048, 00:24:03.048 "data_size": 63488 00:24:03.048 } 00:24:03.048 ] 00:24:03.048 }' 00:24:03.048 05:06:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:03.048 05:06:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:03.048 05:06:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:03.307 05:06:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:03.307 05:06:32 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:24:03.307 05:06:32 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:24:03.307 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:24:03.307 05:06:32 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:24:03.307 05:06:32 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:24:03.307 05:06:32 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:24:03.307 05:06:32 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:03.565 [2024-04-27 05:06:33.219033] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:03.565 [2024-04-27 05:06:33.245339] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:03.565 [2024-04-27 05:06:33.360773] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005d40 00:24:03.565 [2024-04-27 05:06:33.360843] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005fb0 00:24:03.565 [2024-04-27 05:06:33.360914] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:03.565 [2024-04-27 05:06:33.372003] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:03.824 05:06:33 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:24:03.824 05:06:33 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:24:03.824 05:06:33 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:03.824 05:06:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:03.824 05:06:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:03.824 05:06:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:03.824 05:06:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:03.824 05:06:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.824 05:06:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.824 [2024-04-27 05:06:33.728869] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:24:04.083 05:06:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:04.083 "name": "raid_bdev1", 00:24:04.083 "uuid": "e4e3c449-0f61-4df6-a6a1-480fe2a352da", 00:24:04.083 "strip_size_kb": 0, 00:24:04.083 "state": "online", 00:24:04.083 "raid_level": "raid1", 00:24:04.083 "superblock": true, 00:24:04.083 "num_base_bdevs": 4, 00:24:04.083 "num_base_bdevs_discovered": 3, 00:24:04.083 "num_base_bdevs_operational": 3, 00:24:04.083 "process": { 00:24:04.083 "type": "rebuild", 00:24:04.083 "target": "spare", 00:24:04.083 
"progress": { 00:24:04.083 "blocks": 26624, 00:24:04.083 "percent": 41 00:24:04.083 } 00:24:04.083 }, 00:24:04.083 "base_bdevs_list": [ 00:24:04.083 { 00:24:04.083 "name": "spare", 00:24:04.083 "uuid": "6fd01339-fdc5-539a-b734-d5c3d6eac2bd", 00:24:04.083 "is_configured": true, 00:24:04.083 "data_offset": 2048, 00:24:04.083 "data_size": 63488 00:24:04.083 }, 00:24:04.083 { 00:24:04.083 "name": null, 00:24:04.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.083 "is_configured": false, 00:24:04.083 "data_offset": 2048, 00:24:04.083 "data_size": 63488 00:24:04.083 }, 00:24:04.083 { 00:24:04.083 "name": "BaseBdev3", 00:24:04.083 "uuid": "ee25923a-1f4a-5fd7-bd0b-ac2d2c01ecf1", 00:24:04.083 "is_configured": true, 00:24:04.083 "data_offset": 2048, 00:24:04.083 "data_size": 63488 00:24:04.083 }, 00:24:04.083 { 00:24:04.083 "name": "BaseBdev4", 00:24:04.083 "uuid": "166b20b1-f14c-5e55-841a-1dbf962895a1", 00:24:04.083 "is_configured": true, 00:24:04.083 "data_offset": 2048, 00:24:04.083 "data_size": 63488 00:24:04.083 } 00:24:04.083 ] 00:24:04.083 }' 00:24:04.083 05:06:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:04.083 05:06:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:04.083 05:06:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:04.083 [2024-04-27 05:06:33.849686] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:04.083 05:06:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:04.083 05:06:33 -- bdev/bdev_raid.sh@657 -- # local timeout=559 00:24:04.083 05:06:33 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:04.083 05:06:33 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:04.083 05:06:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:04.083 05:06:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:04.083 05:06:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:04.083 05:06:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:04.083 05:06:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.083 05:06:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.342 05:06:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:04.342 "name": "raid_bdev1", 00:24:04.342 "uuid": "e4e3c449-0f61-4df6-a6a1-480fe2a352da", 00:24:04.342 "strip_size_kb": 0, 00:24:04.342 "state": "online", 00:24:04.342 "raid_level": "raid1", 00:24:04.342 "superblock": true, 00:24:04.342 "num_base_bdevs": 4, 00:24:04.342 "num_base_bdevs_discovered": 3, 00:24:04.342 "num_base_bdevs_operational": 3, 00:24:04.342 "process": { 00:24:04.342 "type": "rebuild", 00:24:04.342 "target": "spare", 00:24:04.342 "progress": { 00:24:04.342 "blocks": 28672, 00:24:04.342 "percent": 45 00:24:04.342 } 00:24:04.342 }, 00:24:04.342 "base_bdevs_list": [ 00:24:04.342 { 00:24:04.342 "name": "spare", 00:24:04.342 "uuid": "6fd01339-fdc5-539a-b734-d5c3d6eac2bd", 00:24:04.342 "is_configured": true, 00:24:04.342 "data_offset": 2048, 00:24:04.342 "data_size": 63488 00:24:04.342 }, 00:24:04.342 { 00:24:04.342 "name": null, 00:24:04.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.342 "is_configured": false, 00:24:04.342 "data_offset": 2048, 00:24:04.342 "data_size": 63488 00:24:04.342 }, 00:24:04.342 { 00:24:04.342 "name": "BaseBdev3", 00:24:04.342 "uuid": 
"ee25923a-1f4a-5fd7-bd0b-ac2d2c01ecf1", 00:24:04.342 "is_configured": true, 00:24:04.342 "data_offset": 2048, 00:24:04.342 "data_size": 63488 00:24:04.342 }, 00:24:04.342 { 00:24:04.342 "name": "BaseBdev4", 00:24:04.342 "uuid": "166b20b1-f14c-5e55-841a-1dbf962895a1", 00:24:04.342 "is_configured": true, 00:24:04.342 "data_offset": 2048, 00:24:04.342 "data_size": 63488 00:24:04.342 } 00:24:04.342 ] 00:24:04.342 }' 00:24:04.342 05:06:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:04.342 05:06:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:04.342 05:06:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:04.342 [2024-04-27 05:06:34.198653] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:24:04.342 05:06:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:04.342 05:06:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:04.909 [2024-04-27 05:06:34.818759] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:24:04.909 [2024-04-27 05:06:34.820010] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:24:05.475 05:06:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:05.475 05:06:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.476 05:06:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:05.476 05:06:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:05.476 05:06:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:05.476 05:06:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:05.476 05:06:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.476 05:06:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.476 [2024-04-27 05:06:35.271291] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:24:05.733 05:06:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:05.733 "name": "raid_bdev1", 00:24:05.733 "uuid": "e4e3c449-0f61-4df6-a6a1-480fe2a352da", 00:24:05.733 "strip_size_kb": 0, 00:24:05.733 "state": "online", 00:24:05.733 "raid_level": "raid1", 00:24:05.733 "superblock": true, 00:24:05.733 "num_base_bdevs": 4, 00:24:05.733 "num_base_bdevs_discovered": 3, 00:24:05.733 "num_base_bdevs_operational": 3, 00:24:05.733 "process": { 00:24:05.733 "type": "rebuild", 00:24:05.733 "target": "spare", 00:24:05.733 "progress": { 00:24:05.733 "blocks": 53248, 00:24:05.733 "percent": 83 00:24:05.733 } 00:24:05.733 }, 00:24:05.733 "base_bdevs_list": [ 00:24:05.733 { 00:24:05.733 "name": "spare", 00:24:05.733 "uuid": "6fd01339-fdc5-539a-b734-d5c3d6eac2bd", 00:24:05.733 "is_configured": true, 00:24:05.733 "data_offset": 2048, 00:24:05.733 "data_size": 63488 00:24:05.733 }, 00:24:05.733 { 00:24:05.733 "name": null, 00:24:05.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.733 "is_configured": false, 00:24:05.733 "data_offset": 2048, 00:24:05.733 "data_size": 63488 00:24:05.733 }, 00:24:05.733 { 00:24:05.733 "name": "BaseBdev3", 00:24:05.733 "uuid": "ee25923a-1f4a-5fd7-bd0b-ac2d2c01ecf1", 00:24:05.733 "is_configured": true, 00:24:05.733 "data_offset": 2048, 00:24:05.733 "data_size": 63488 00:24:05.733 }, 00:24:05.733 { 
00:24:05.733 "name": "BaseBdev4", 00:24:05.733 "uuid": "166b20b1-f14c-5e55-841a-1dbf962895a1", 00:24:05.733 "is_configured": true, 00:24:05.733 "data_offset": 2048, 00:24:05.733 "data_size": 63488 00:24:05.733 } 00:24:05.733 ] 00:24:05.733 }' 00:24:05.733 05:06:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:05.733 05:06:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:05.733 05:06:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:05.733 05:06:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:05.733 05:06:35 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:05.991 [2024-04-27 05:06:35.723249] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:24:06.249 [2024-04-27 05:06:36.066660] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:06.506 [2024-04-27 05:06:36.174403] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:06.506 [2024-04-27 05:06:36.178197] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:06.793 05:06:36 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:06.793 05:06:36 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:06.793 05:06:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:06.793 05:06:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:06.793 05:06:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:06.793 05:06:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:06.793 05:06:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.793 05:06:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.052 05:06:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:07.052 "name": "raid_bdev1", 00:24:07.052 "uuid": "e4e3c449-0f61-4df6-a6a1-480fe2a352da", 00:24:07.052 "strip_size_kb": 0, 00:24:07.052 "state": "online", 00:24:07.052 "raid_level": "raid1", 00:24:07.052 "superblock": true, 00:24:07.052 "num_base_bdevs": 4, 00:24:07.052 "num_base_bdevs_discovered": 3, 00:24:07.052 "num_base_bdevs_operational": 3, 00:24:07.052 "base_bdevs_list": [ 00:24:07.052 { 00:24:07.052 "name": "spare", 00:24:07.052 "uuid": "6fd01339-fdc5-539a-b734-d5c3d6eac2bd", 00:24:07.052 "is_configured": true, 00:24:07.052 "data_offset": 2048, 00:24:07.052 "data_size": 63488 00:24:07.052 }, 00:24:07.052 { 00:24:07.052 "name": null, 00:24:07.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.052 "is_configured": false, 00:24:07.052 "data_offset": 2048, 00:24:07.052 "data_size": 63488 00:24:07.052 }, 00:24:07.052 { 00:24:07.052 "name": "BaseBdev3", 00:24:07.052 "uuid": "ee25923a-1f4a-5fd7-bd0b-ac2d2c01ecf1", 00:24:07.052 "is_configured": true, 00:24:07.052 "data_offset": 2048, 00:24:07.052 "data_size": 63488 00:24:07.052 }, 00:24:07.052 { 00:24:07.052 "name": "BaseBdev4", 00:24:07.052 "uuid": "166b20b1-f14c-5e55-841a-1dbf962895a1", 00:24:07.052 "is_configured": true, 00:24:07.052 "data_offset": 2048, 00:24:07.052 "data_size": 63488 00:24:07.052 } 00:24:07.052 ] 00:24:07.052 }' 00:24:07.052 05:06:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:07.052 05:06:36 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:07.052 05:06:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 
00:24:07.052 05:06:36 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:07.052 05:06:36 -- bdev/bdev_raid.sh@660 -- # break 00:24:07.052 05:06:36 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:07.052 05:06:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:07.052 05:06:36 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:07.052 05:06:36 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:07.052 05:06:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:07.052 05:06:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.052 05:06:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.311 05:06:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:07.311 "name": "raid_bdev1", 00:24:07.311 "uuid": "e4e3c449-0f61-4df6-a6a1-480fe2a352da", 00:24:07.311 "strip_size_kb": 0, 00:24:07.311 "state": "online", 00:24:07.311 "raid_level": "raid1", 00:24:07.311 "superblock": true, 00:24:07.311 "num_base_bdevs": 4, 00:24:07.311 "num_base_bdevs_discovered": 3, 00:24:07.311 "num_base_bdevs_operational": 3, 00:24:07.311 "base_bdevs_list": [ 00:24:07.311 { 00:24:07.311 "name": "spare", 00:24:07.311 "uuid": "6fd01339-fdc5-539a-b734-d5c3d6eac2bd", 00:24:07.311 "is_configured": true, 00:24:07.311 "data_offset": 2048, 00:24:07.311 "data_size": 63488 00:24:07.311 }, 00:24:07.311 { 00:24:07.311 "name": null, 00:24:07.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.311 "is_configured": false, 00:24:07.311 "data_offset": 2048, 00:24:07.311 "data_size": 63488 00:24:07.311 }, 00:24:07.311 { 00:24:07.311 "name": "BaseBdev3", 00:24:07.311 "uuid": "ee25923a-1f4a-5fd7-bd0b-ac2d2c01ecf1", 00:24:07.311 "is_configured": true, 00:24:07.311 "data_offset": 2048, 00:24:07.311 "data_size": 63488 00:24:07.311 }, 00:24:07.311 { 00:24:07.311 "name": "BaseBdev4", 00:24:07.311 "uuid": "166b20b1-f14c-5e55-841a-1dbf962895a1", 00:24:07.311 "is_configured": true, 00:24:07.311 "data_offset": 2048, 00:24:07.311 "data_size": 63488 00:24:07.311 } 00:24:07.311 ] 00:24:07.311 }' 00:24:07.311 05:06:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:07.569 05:06:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:07.569 05:06:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:07.569 05:06:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:07.569 05:06:37 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:07.569 05:06:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:07.569 05:06:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:07.569 05:06:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:07.569 05:06:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:07.569 05:06:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:07.569 05:06:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:07.569 05:06:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:07.569 05:06:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:07.569 05:06:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:07.569 05:06:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.569 05:06:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.828 05:06:37 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:07.828 "name": "raid_bdev1", 00:24:07.828 "uuid": "e4e3c449-0f61-4df6-a6a1-480fe2a352da", 00:24:07.828 "strip_size_kb": 0, 00:24:07.828 "state": "online", 00:24:07.828 "raid_level": "raid1", 00:24:07.828 "superblock": true, 00:24:07.828 "num_base_bdevs": 4, 00:24:07.828 "num_base_bdevs_discovered": 3, 00:24:07.828 "num_base_bdevs_operational": 3, 00:24:07.828 "base_bdevs_list": [ 00:24:07.828 { 00:24:07.828 "name": "spare", 00:24:07.828 "uuid": "6fd01339-fdc5-539a-b734-d5c3d6eac2bd", 00:24:07.828 "is_configured": true, 00:24:07.828 "data_offset": 2048, 00:24:07.828 "data_size": 63488 00:24:07.828 }, 00:24:07.828 { 00:24:07.828 "name": null, 00:24:07.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.828 "is_configured": false, 00:24:07.828 "data_offset": 2048, 00:24:07.828 "data_size": 63488 00:24:07.828 }, 00:24:07.828 { 00:24:07.828 "name": "BaseBdev3", 00:24:07.828 "uuid": "ee25923a-1f4a-5fd7-bd0b-ac2d2c01ecf1", 00:24:07.828 "is_configured": true, 00:24:07.828 "data_offset": 2048, 00:24:07.828 "data_size": 63488 00:24:07.828 }, 00:24:07.828 { 00:24:07.828 "name": "BaseBdev4", 00:24:07.828 "uuid": "166b20b1-f14c-5e55-841a-1dbf962895a1", 00:24:07.828 "is_configured": true, 00:24:07.828 "data_offset": 2048, 00:24:07.828 "data_size": 63488 00:24:07.828 } 00:24:07.828 ] 00:24:07.828 }' 00:24:07.828 05:06:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:07.828 05:06:37 -- common/autotest_common.sh@10 -- # set +x 00:24:08.395 05:06:38 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:08.654 [2024-04-27 05:06:38.452444] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:08.654 [2024-04-27 05:06:38.452506] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:08.654 00:24:08.654 Latency(us) 00:24:08.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.654 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:08.654 raid_bdev1 : 11.96 78.86 236.58 0.00 0.00 18038.74 331.40 119632.99 00:24:08.654 =================================================================================================================== 00:24:08.654 Total : 78.86 236.58 0.00 0.00 18038.74 331.40 119632.99 00:24:08.654 [2024-04-27 05:06:38.522040] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:08.654 [2024-04-27 05:06:38.522126] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:08.654 [2024-04-27 05:06:38.522266] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:08.654 [2024-04-27 05:06:38.522283] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:24:08.654 0 00:24:08.654 05:06:38 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.654 05:06:38 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:09.221 05:06:38 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:09.221 05:06:38 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:24:09.221 05:06:38 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:24:09.221 05:06:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:09.221 05:06:38 -- bdev/nbd_common.sh@10 -- 
# bdev_list=('spare') 00:24:09.221 05:06:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:09.221 05:06:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:09.221 05:06:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:09.221 05:06:38 -- bdev/nbd_common.sh@12 -- # local i 00:24:09.221 05:06:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:09.221 05:06:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:09.221 05:06:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:24:09.221 /dev/nbd0 00:24:09.221 05:06:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:09.221 05:06:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:09.221 05:06:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:09.221 05:06:39 -- common/autotest_common.sh@857 -- # local i 00:24:09.221 05:06:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:09.221 05:06:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:09.221 05:06:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:09.221 05:06:39 -- common/autotest_common.sh@861 -- # break 00:24:09.221 05:06:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:09.221 05:06:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:09.221 05:06:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:09.221 1+0 records in 00:24:09.221 1+0 records out 00:24:09.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000622923 s, 6.6 MB/s 00:24:09.221 05:06:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:09.221 05:06:39 -- common/autotest_common.sh@874 -- # size=4096 00:24:09.221 05:06:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:09.522 05:06:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:09.522 05:06:39 -- common/autotest_common.sh@877 -- # return 0 00:24:09.522 05:06:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:09.522 05:06:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:09.522 05:06:39 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:24:09.522 05:06:39 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:24:09.522 05:06:39 -- bdev/bdev_raid.sh@678 -- # continue 00:24:09.522 05:06:39 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:24:09.522 05:06:39 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:24:09.522 05:06:39 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:24:09.522 05:06:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:09.522 05:06:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:24:09.522 05:06:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:09.522 05:06:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:09.522 05:06:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:09.522 05:06:39 -- bdev/nbd_common.sh@12 -- # local i 00:24:09.522 05:06:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:09.522 05:06:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:09.522 05:06:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:24:09.522 /dev/nbd1 00:24:09.522 05:06:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:09.522 05:06:39 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:24:09.522 05:06:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:24:09.522 05:06:39 -- common/autotest_common.sh@857 -- # local i 00:24:09.522 05:06:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:09.522 05:06:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:09.522 05:06:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:24:09.781 05:06:39 -- common/autotest_common.sh@861 -- # break 00:24:09.781 05:06:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:09.781 05:06:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:09.781 05:06:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:09.781 1+0 records in 00:24:09.781 1+0 records out 00:24:09.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346955 s, 11.8 MB/s 00:24:09.781 05:06:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:09.781 05:06:39 -- common/autotest_common.sh@874 -- # size=4096 00:24:09.781 05:06:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:09.781 05:06:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:09.781 05:06:39 -- common/autotest_common.sh@877 -- # return 0 00:24:09.781 05:06:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:09.781 05:06:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:09.781 05:06:39 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:09.781 05:06:39 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:09.781 05:06:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:09.781 05:06:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:09.781 05:06:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:09.781 05:06:39 -- bdev/nbd_common.sh@51 -- # local i 00:24:09.781 05:06:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:09.781 05:06:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:10.039 05:06:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:10.039 05:06:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:10.039 05:06:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:10.039 05:06:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:10.039 05:06:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:10.039 05:06:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:10.039 05:06:39 -- bdev/nbd_common.sh@41 -- # break 00:24:10.039 05:06:39 -- bdev/nbd_common.sh@45 -- # return 0 00:24:10.039 05:06:39 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:24:10.039 05:06:39 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:24:10.039 05:06:39 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:24:10.039 05:06:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:10.039 05:06:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:24:10.039 05:06:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:10.039 05:06:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:10.039 05:06:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:10.039 05:06:39 -- bdev/nbd_common.sh@12 -- # local i 00:24:10.039 05:06:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:10.039 05:06:39 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:24:10.039 05:06:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:24:10.298 /dev/nbd1 00:24:10.298 05:06:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:10.298 05:06:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:10.298 05:06:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:24:10.298 05:06:40 -- common/autotest_common.sh@857 -- # local i 00:24:10.298 05:06:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:10.298 05:06:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:10.298 05:06:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:24:10.298 05:06:40 -- common/autotest_common.sh@861 -- # break 00:24:10.298 05:06:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:10.298 05:06:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:10.298 05:06:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:10.298 1+0 records in 00:24:10.298 1+0 records out 00:24:10.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000836359 s, 4.9 MB/s 00:24:10.298 05:06:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:10.298 05:06:40 -- common/autotest_common.sh@874 -- # size=4096 00:24:10.298 05:06:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:10.298 05:06:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:10.298 05:06:40 -- common/autotest_common.sh@877 -- # return 0 00:24:10.298 05:06:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:10.298 05:06:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:10.298 05:06:40 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:10.298 05:06:40 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:10.298 05:06:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:10.557 05:06:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:10.557 05:06:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:10.557 05:06:40 -- bdev/nbd_common.sh@51 -- # local i 00:24:10.557 05:06:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:10.557 05:06:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:10.815 05:06:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:10.815 05:06:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:10.815 05:06:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:10.815 05:06:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:10.815 05:06:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:10.815 05:06:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:10.815 05:06:40 -- bdev/nbd_common.sh@41 -- # break 00:24:10.815 05:06:40 -- bdev/nbd_common.sh@45 -- # return 0 00:24:10.815 05:06:40 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:10.815 05:06:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:10.815 05:06:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:10.815 05:06:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:10.815 05:06:40 -- bdev/nbd_common.sh@51 -- # local i 00:24:10.815 05:06:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:10.815 05:06:40 -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:11.073 05:06:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:11.073 05:06:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:11.073 05:06:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:11.073 05:06:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:11.073 05:06:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:11.073 05:06:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:11.073 05:06:40 -- bdev/nbd_common.sh@41 -- # break 00:24:11.073 05:06:40 -- bdev/nbd_common.sh@45 -- # return 0 00:24:11.073 05:06:40 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:24:11.073 05:06:40 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:11.073 05:06:40 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:24:11.073 05:06:40 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:11.332 05:06:41 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:11.591 [2024-04-27 05:06:41.277694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:11.591 [2024-04-27 05:06:41.277821] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:11.591 [2024-04-27 05:06:41.277878] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:24:11.591 [2024-04-27 05:06:41.277906] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:11.591 [2024-04-27 05:06:41.280762] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:11.591 [2024-04-27 05:06:41.280841] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:11.591 [2024-04-27 05:06:41.280975] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:11.591 [2024-04-27 05:06:41.281040] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:11.591 BaseBdev1 00:24:11.591 05:06:41 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:11.591 05:06:41 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:24:11.591 05:06:41 -- bdev/bdev_raid.sh@696 -- # continue 00:24:11.591 05:06:41 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:11.591 05:06:41 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:24:11.591 05:06:41 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:24:11.849 05:06:41 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:12.107 [2024-04-27 05:06:41.761887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:12.107 [2024-04-27 05:06:41.762019] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.107 [2024-04-27 05:06:41.762078] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:24:12.107 [2024-04-27 05:06:41.762107] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.107 [2024-04-27 05:06:41.762654] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:24:12.107 [2024-04-27 05:06:41.762733] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:12.107 [2024-04-27 05:06:41.762845] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:24:12.107 [2024-04-27 05:06:41.762862] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:24:12.107 [2024-04-27 05:06:41.762871] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:12.107 [2024-04-27 05:06:41.762905] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:24:12.107 [2024-04-27 05:06:41.762969] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:12.107 BaseBdev3 00:24:12.107 05:06:41 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:12.107 05:06:41 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:24:12.107 05:06:41 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:24:12.365 05:06:42 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:12.623 [2024-04-27 05:06:42.338073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:12.623 [2024-04-27 05:06:42.338227] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.623 [2024-04-27 05:06:42.338281] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:24:12.623 [2024-04-27 05:06:42.338326] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.623 [2024-04-27 05:06:42.338906] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.623 [2024-04-27 05:06:42.338978] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:12.623 [2024-04-27 05:06:42.339091] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:24:12.623 [2024-04-27 05:06:42.339131] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:12.623 BaseBdev4 00:24:12.623 05:06:42 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:12.881 05:06:42 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:13.143 [2024-04-27 05:06:42.860323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:13.143 [2024-04-27 05:06:42.860462] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.143 [2024-04-27 05:06:42.860516] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:24:13.143 [2024-04-27 05:06:42.860551] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.143 [2024-04-27 05:06:42.861164] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.143 [2024-04-27 05:06:42.861242] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:13.143 [2024-04-27 05:06:42.861377] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid 
superblock found on bdev spare 00:24:13.143 [2024-04-27 05:06:42.861425] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:13.143 spare 00:24:13.143 05:06:42 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:13.143 05:06:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:13.143 05:06:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:13.143 05:06:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:13.143 05:06:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:13.143 05:06:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:13.143 05:06:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:13.143 05:06:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:13.143 05:06:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:13.143 05:06:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:13.143 05:06:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.143 05:06:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.143 [2024-04-27 05:06:42.961584] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:24:13.143 [2024-04-27 05:06:42.961637] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:13.143 [2024-04-27 05:06:42.961867] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:24:13.143 [2024-04-27 05:06:42.962396] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:24:13.143 [2024-04-27 05:06:42.962421] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:24:13.143 [2024-04-27 05:06:42.962605] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:13.402 05:06:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:13.402 "name": "raid_bdev1", 00:24:13.402 "uuid": "e4e3c449-0f61-4df6-a6a1-480fe2a352da", 00:24:13.402 "strip_size_kb": 0, 00:24:13.402 "state": "online", 00:24:13.403 "raid_level": "raid1", 00:24:13.403 "superblock": true, 00:24:13.403 "num_base_bdevs": 4, 00:24:13.403 "num_base_bdevs_discovered": 3, 00:24:13.403 "num_base_bdevs_operational": 3, 00:24:13.403 "base_bdevs_list": [ 00:24:13.403 { 00:24:13.403 "name": "spare", 00:24:13.403 "uuid": "6fd01339-fdc5-539a-b734-d5c3d6eac2bd", 00:24:13.403 "is_configured": true, 00:24:13.403 "data_offset": 2048, 00:24:13.403 "data_size": 63488 00:24:13.403 }, 00:24:13.403 { 00:24:13.403 "name": null, 00:24:13.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.403 "is_configured": false, 00:24:13.403 "data_offset": 2048, 00:24:13.403 "data_size": 63488 00:24:13.403 }, 00:24:13.403 { 00:24:13.403 "name": "BaseBdev3", 00:24:13.403 "uuid": "ee25923a-1f4a-5fd7-bd0b-ac2d2c01ecf1", 00:24:13.403 "is_configured": true, 00:24:13.403 "data_offset": 2048, 00:24:13.403 "data_size": 63488 00:24:13.403 }, 00:24:13.403 { 00:24:13.403 "name": "BaseBdev4", 00:24:13.403 "uuid": "166b20b1-f14c-5e55-841a-1dbf962895a1", 00:24:13.403 "is_configured": true, 00:24:13.403 "data_offset": 2048, 00:24:13.403 "data_size": 63488 00:24:13.403 } 00:24:13.403 ] 00:24:13.403 }' 00:24:13.403 05:06:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:13.403 05:06:43 -- common/autotest_common.sh@10 -- # set +x 00:24:13.969 05:06:43 -- bdev/bdev_raid.sh@705 
-- # verify_raid_bdev_process raid_bdev1 none none 00:24:13.969 05:06:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:13.969 05:06:43 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:13.969 05:06:43 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:13.969 05:06:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:13.969 05:06:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.969 05:06:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.227 05:06:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:14.227 "name": "raid_bdev1", 00:24:14.227 "uuid": "e4e3c449-0f61-4df6-a6a1-480fe2a352da", 00:24:14.227 "strip_size_kb": 0, 00:24:14.227 "state": "online", 00:24:14.227 "raid_level": "raid1", 00:24:14.227 "superblock": true, 00:24:14.227 "num_base_bdevs": 4, 00:24:14.227 "num_base_bdevs_discovered": 3, 00:24:14.227 "num_base_bdevs_operational": 3, 00:24:14.227 "base_bdevs_list": [ 00:24:14.227 { 00:24:14.227 "name": "spare", 00:24:14.227 "uuid": "6fd01339-fdc5-539a-b734-d5c3d6eac2bd", 00:24:14.227 "is_configured": true, 00:24:14.227 "data_offset": 2048, 00:24:14.227 "data_size": 63488 00:24:14.227 }, 00:24:14.227 { 00:24:14.227 "name": null, 00:24:14.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.227 "is_configured": false, 00:24:14.227 "data_offset": 2048, 00:24:14.227 "data_size": 63488 00:24:14.227 }, 00:24:14.227 { 00:24:14.227 "name": "BaseBdev3", 00:24:14.227 "uuid": "ee25923a-1f4a-5fd7-bd0b-ac2d2c01ecf1", 00:24:14.227 "is_configured": true, 00:24:14.227 "data_offset": 2048, 00:24:14.227 "data_size": 63488 00:24:14.227 }, 00:24:14.227 { 00:24:14.227 "name": "BaseBdev4", 00:24:14.227 "uuid": "166b20b1-f14c-5e55-841a-1dbf962895a1", 00:24:14.227 "is_configured": true, 00:24:14.227 "data_offset": 2048, 00:24:14.227 "data_size": 63488 00:24:14.227 } 00:24:14.227 ] 00:24:14.227 }' 00:24:14.227 05:06:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:14.227 05:06:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:14.227 05:06:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:14.486 05:06:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:14.486 05:06:44 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.486 05:06:44 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:14.743 05:06:44 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:24:14.743 05:06:44 -- bdev/bdev_raid.sh@709 -- # killprocess 138694 00:24:14.743 05:06:44 -- common/autotest_common.sh@926 -- # '[' -z 138694 ']' 00:24:14.743 05:06:44 -- common/autotest_common.sh@930 -- # kill -0 138694 00:24:14.743 05:06:44 -- common/autotest_common.sh@931 -- # uname 00:24:14.743 05:06:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:14.743 05:06:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 138694 00:24:14.743 killing process with pid 138694 00:24:14.743 Received shutdown signal, test time was about 17.899395 seconds 00:24:14.743 00:24:14.743 Latency(us) 00:24:14.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.743 =================================================================================================================== 00:24:14.743 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:14.743 05:06:44 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:14.743 05:06:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:14.743 05:06:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 138694' 00:24:14.743 05:06:44 -- common/autotest_common.sh@945 -- # kill 138694 00:24:14.743 05:06:44 -- common/autotest_common.sh@950 -- # wait 138694 00:24:14.743 [2024-04-27 05:06:44.458045] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:14.743 [2024-04-27 05:06:44.458187] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:14.743 [2024-04-27 05:06:44.458300] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:14.743 [2024-04-27 05:06:44.458326] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:24:14.743 [2024-04-27 05:06:44.546815] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:15.309 ************************************ 00:24:15.309 END TEST raid_rebuild_test_sb_io 00:24:15.309 ************************************ 00:24:15.309 00:24:15.309 real 0m24.055s 00:24:15.309 user 0m39.419s 00:24:15.309 sys 0m3.390s 00:24:15.309 05:06:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:15.309 05:06:44 -- common/autotest_common.sh@10 -- # set +x 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:24:15.309 05:06:44 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:24:15.309 05:06:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:15.309 05:06:44 -- common/autotest_common.sh@10 -- # set +x 00:24:15.309 ************************************ 00:24:15.309 START TEST raid5f_state_function_test 00:24:15.309 ************************************ 00:24:15.309 05:06:44 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 false 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:15.309 05:06:44 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:24:15.310 05:06:44 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:15.310 05:06:44 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:15.310 05:06:44 -- bdev/bdev_raid.sh@208 
-- # local strip_size 00:24:15.310 05:06:44 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:15.310 05:06:44 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:15.310 05:06:44 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:15.310 05:06:44 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:15.310 05:06:44 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:15.310 05:06:44 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:24:15.310 05:06:44 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:24:15.310 05:06:44 -- bdev/bdev_raid.sh@226 -- # raid_pid=139316 00:24:15.310 05:06:44 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 139316' 00:24:15.310 05:06:44 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:15.310 Process raid pid: 139316 00:24:15.310 05:06:44 -- bdev/bdev_raid.sh@228 -- # waitforlisten 139316 /var/tmp/spdk-raid.sock 00:24:15.310 05:06:44 -- common/autotest_common.sh@819 -- # '[' -z 139316 ']' 00:24:15.310 05:06:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:15.310 05:06:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:15.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:15.310 05:06:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:15.310 05:06:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:15.310 05:06:44 -- common/autotest_common.sh@10 -- # set +x 00:24:15.310 [2024-04-27 05:06:45.056378] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
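The waitforlisten step above blocks until the freshly started bdev_svc process (pid 139316 in this run) is accepting RPCs on /var/tmp/spdk-raid.sock before the raid5f test issues any commands. A rough standalone equivalent — not part of the captured output — assuming the same pid and socket; the retry count and polling interval below are illustrative assumptions, not the values used by autotest_common.sh:

  # Sketch only; pid and socket are the ones reported above, loop bounds are assumptions.
  pid=139316
  sock=/var/tmp/spdk-raid.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      kill -0 "$pid" 2>/dev/null || { echo "bdev_svc (pid $pid) exited before listening" >&2; break; }
      # rpc_get_methods is a lightweight call that succeeds once the RPC server is up.
      [ -S "$sock" ] && "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done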
00:24:15.310 [2024-04-27 05:06:45.056658] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.568 [2024-04-27 05:06:45.228924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.568 [2024-04-27 05:06:45.348920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.568 [2024-04-27 05:06:45.425421] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:16.135 05:06:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:16.135 05:06:45 -- common/autotest_common.sh@852 -- # return 0 00:24:16.135 05:06:45 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:24:16.398 [2024-04-27 05:06:46.242323] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:16.398 [2024-04-27 05:06:46.242689] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:16.398 [2024-04-27 05:06:46.242816] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:16.398 [2024-04-27 05:06:46.242882] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:16.398 [2024-04-27 05:06:46.243037] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:16.398 [2024-04-27 05:06:46.243225] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:16.398 05:06:46 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:24:16.398 05:06:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:16.398 05:06:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:16.398 05:06:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:16.398 05:06:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:16.398 05:06:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:16.398 05:06:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:16.398 05:06:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:16.398 05:06:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:16.398 05:06:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:16.398 05:06:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.398 05:06:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:16.656 05:06:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:16.656 "name": "Existed_Raid", 00:24:16.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.656 "strip_size_kb": 64, 00:24:16.656 "state": "configuring", 00:24:16.656 "raid_level": "raid5f", 00:24:16.656 "superblock": false, 00:24:16.656 "num_base_bdevs": 3, 00:24:16.656 "num_base_bdevs_discovered": 0, 00:24:16.656 "num_base_bdevs_operational": 3, 00:24:16.656 "base_bdevs_list": [ 00:24:16.656 { 00:24:16.656 "name": "BaseBdev1", 00:24:16.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.656 "is_configured": false, 00:24:16.656 "data_offset": 0, 00:24:16.656 "data_size": 0 00:24:16.656 }, 00:24:16.656 { 00:24:16.656 "name": "BaseBdev2", 00:24:16.656 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:24:16.656 "is_configured": false, 00:24:16.656 "data_offset": 0, 00:24:16.656 "data_size": 0 00:24:16.656 }, 00:24:16.656 { 00:24:16.656 "name": "BaseBdev3", 00:24:16.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.657 "is_configured": false, 00:24:16.657 "data_offset": 0, 00:24:16.657 "data_size": 0 00:24:16.657 } 00:24:16.657 ] 00:24:16.657 }' 00:24:16.657 05:06:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:16.657 05:06:46 -- common/autotest_common.sh@10 -- # set +x 00:24:17.222 05:06:47 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:17.480 [2024-04-27 05:06:47.326654] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:17.480 [2024-04-27 05:06:47.326987] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:24:17.480 05:06:47 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:24:17.739 [2024-04-27 05:06:47.606747] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:17.739 [2024-04-27 05:06:47.607104] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:17.739 [2024-04-27 05:06:47.607230] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:17.739 [2024-04-27 05:06:47.607381] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:17.739 [2024-04-27 05:06:47.607494] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:17.739 [2024-04-27 05:06:47.607566] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:17.739 05:06:47 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:17.997 [2024-04-27 05:06:47.855556] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:17.997 BaseBdev1 00:24:17.997 05:06:47 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:17.997 05:06:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:24:17.997 05:06:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:17.997 05:06:47 -- common/autotest_common.sh@889 -- # local i 00:24:17.997 05:06:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:17.997 05:06:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:17.997 05:06:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:18.255 05:06:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:18.513 [ 00:24:18.513 { 00:24:18.513 "name": "BaseBdev1", 00:24:18.513 "aliases": [ 00:24:18.513 "d3f084e7-85e5-41cc-8141-b92b3114750d" 00:24:18.513 ], 00:24:18.513 "product_name": "Malloc disk", 00:24:18.513 "block_size": 512, 00:24:18.513 "num_blocks": 65536, 00:24:18.513 "uuid": "d3f084e7-85e5-41cc-8141-b92b3114750d", 00:24:18.513 "assigned_rate_limits": { 00:24:18.513 "rw_ios_per_sec": 0, 00:24:18.513 "rw_mbytes_per_sec": 0, 00:24:18.513 "r_mbytes_per_sec": 0, 00:24:18.513 "w_mbytes_per_sec": 
0 00:24:18.513 }, 00:24:18.513 "claimed": true, 00:24:18.513 "claim_type": "exclusive_write", 00:24:18.513 "zoned": false, 00:24:18.513 "supported_io_types": { 00:24:18.513 "read": true, 00:24:18.513 "write": true, 00:24:18.513 "unmap": true, 00:24:18.513 "write_zeroes": true, 00:24:18.513 "flush": true, 00:24:18.513 "reset": true, 00:24:18.513 "compare": false, 00:24:18.513 "compare_and_write": false, 00:24:18.513 "abort": true, 00:24:18.513 "nvme_admin": false, 00:24:18.513 "nvme_io": false 00:24:18.513 }, 00:24:18.513 "memory_domains": [ 00:24:18.513 { 00:24:18.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:18.514 "dma_device_type": 2 00:24:18.514 } 00:24:18.514 ], 00:24:18.514 "driver_specific": {} 00:24:18.514 } 00:24:18.514 ] 00:24:18.514 05:06:48 -- common/autotest_common.sh@895 -- # return 0 00:24:18.514 05:06:48 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:24:18.514 05:06:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:18.514 05:06:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:18.514 05:06:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:18.514 05:06:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:18.514 05:06:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:18.514 05:06:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:18.514 05:06:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:18.514 05:06:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:18.514 05:06:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:18.514 05:06:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.514 05:06:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:18.772 05:06:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:18.772 "name": "Existed_Raid", 00:24:18.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.772 "strip_size_kb": 64, 00:24:18.772 "state": "configuring", 00:24:18.772 "raid_level": "raid5f", 00:24:18.772 "superblock": false, 00:24:18.772 "num_base_bdevs": 3, 00:24:18.772 "num_base_bdevs_discovered": 1, 00:24:18.772 "num_base_bdevs_operational": 3, 00:24:18.772 "base_bdevs_list": [ 00:24:18.772 { 00:24:18.772 "name": "BaseBdev1", 00:24:18.772 "uuid": "d3f084e7-85e5-41cc-8141-b92b3114750d", 00:24:18.772 "is_configured": true, 00:24:18.772 "data_offset": 0, 00:24:18.772 "data_size": 65536 00:24:18.772 }, 00:24:18.772 { 00:24:18.772 "name": "BaseBdev2", 00:24:18.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.772 "is_configured": false, 00:24:18.772 "data_offset": 0, 00:24:18.772 "data_size": 0 00:24:18.772 }, 00:24:18.772 { 00:24:18.772 "name": "BaseBdev3", 00:24:18.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.772 "is_configured": false, 00:24:18.772 "data_offset": 0, 00:24:18.772 "data_size": 0 00:24:18.772 } 00:24:18.772 ] 00:24:18.772 }' 00:24:18.772 05:06:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:18.772 05:06:48 -- common/autotest_common.sh@10 -- # set +x 00:24:19.338 05:06:49 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:19.905 [2024-04-27 05:06:49.524070] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:19.905 [2024-04-27 05:06:49.524447] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006680 name Existed_Raid, state configuring 00:24:19.905 05:06:49 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:24:19.905 05:06:49 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:24:19.905 [2024-04-27 05:06:49.804287] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:19.905 [2024-04-27 05:06:49.807207] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:19.905 [2024-04-27 05:06:49.807396] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:19.905 [2024-04-27 05:06:49.807518] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:19.905 [2024-04-27 05:06:49.807590] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:20.164 05:06:49 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:20.164 05:06:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:20.164 05:06:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:24:20.164 05:06:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:20.164 05:06:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:20.164 05:06:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:20.164 05:06:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:20.164 05:06:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:20.164 05:06:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:20.164 05:06:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:20.164 05:06:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:20.164 05:06:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:20.164 05:06:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:20.164 05:06:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.423 05:06:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:20.423 "name": "Existed_Raid", 00:24:20.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.423 "strip_size_kb": 64, 00:24:20.423 "state": "configuring", 00:24:20.423 "raid_level": "raid5f", 00:24:20.423 "superblock": false, 00:24:20.423 "num_base_bdevs": 3, 00:24:20.423 "num_base_bdevs_discovered": 1, 00:24:20.423 "num_base_bdevs_operational": 3, 00:24:20.423 "base_bdevs_list": [ 00:24:20.423 { 00:24:20.423 "name": "BaseBdev1", 00:24:20.423 "uuid": "d3f084e7-85e5-41cc-8141-b92b3114750d", 00:24:20.423 "is_configured": true, 00:24:20.423 "data_offset": 0, 00:24:20.423 "data_size": 65536 00:24:20.423 }, 00:24:20.423 { 00:24:20.423 "name": "BaseBdev2", 00:24:20.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.423 "is_configured": false, 00:24:20.423 "data_offset": 0, 00:24:20.423 "data_size": 0 00:24:20.423 }, 00:24:20.423 { 00:24:20.423 "name": "BaseBdev3", 00:24:20.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.423 "is_configured": false, 00:24:20.423 "data_offset": 0, 00:24:20.423 "data_size": 0 00:24:20.423 } 00:24:20.423 ] 00:24:20.423 }' 00:24:20.423 05:06:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:20.423 05:06:50 -- common/autotest_common.sh@10 -- # set +x 00:24:20.991 05:06:50 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:21.251 [2024-04-27 05:06:50.998185] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:21.251 BaseBdev2 00:24:21.251 05:06:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:21.251 05:06:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:24:21.251 05:06:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:21.251 05:06:51 -- common/autotest_common.sh@889 -- # local i 00:24:21.251 05:06:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:21.251 05:06:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:21.251 05:06:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:21.509 05:06:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:21.768 [ 00:24:21.768 { 00:24:21.768 "name": "BaseBdev2", 00:24:21.768 "aliases": [ 00:24:21.768 "3cf935da-83a1-42d1-9e87-85d98d777050" 00:24:21.768 ], 00:24:21.768 "product_name": "Malloc disk", 00:24:21.768 "block_size": 512, 00:24:21.768 "num_blocks": 65536, 00:24:21.768 "uuid": "3cf935da-83a1-42d1-9e87-85d98d777050", 00:24:21.768 "assigned_rate_limits": { 00:24:21.768 "rw_ios_per_sec": 0, 00:24:21.768 "rw_mbytes_per_sec": 0, 00:24:21.768 "r_mbytes_per_sec": 0, 00:24:21.768 "w_mbytes_per_sec": 0 00:24:21.768 }, 00:24:21.768 "claimed": true, 00:24:21.768 "claim_type": "exclusive_write", 00:24:21.768 "zoned": false, 00:24:21.768 "supported_io_types": { 00:24:21.768 "read": true, 00:24:21.768 "write": true, 00:24:21.768 "unmap": true, 00:24:21.768 "write_zeroes": true, 00:24:21.768 "flush": true, 00:24:21.768 "reset": true, 00:24:21.768 "compare": false, 00:24:21.768 "compare_and_write": false, 00:24:21.768 "abort": true, 00:24:21.768 "nvme_admin": false, 00:24:21.768 "nvme_io": false 00:24:21.768 }, 00:24:21.768 "memory_domains": [ 00:24:21.768 { 00:24:21.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:21.768 "dma_device_type": 2 00:24:21.768 } 00:24:21.768 ], 00:24:21.768 "driver_specific": {} 00:24:21.768 } 00:24:21.768 ] 00:24:21.768 05:06:51 -- common/autotest_common.sh@895 -- # return 0 00:24:21.768 05:06:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:21.768 05:06:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:21.768 05:06:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:24:21.768 05:06:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:21.769 05:06:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:21.769 05:06:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:21.769 05:06:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:21.769 05:06:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:21.769 05:06:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:21.769 05:06:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:21.769 05:06:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:21.769 05:06:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:21.769 05:06:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.769 05:06:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:24:22.028 05:06:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:22.028 "name": "Existed_Raid", 00:24:22.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.028 "strip_size_kb": 64, 00:24:22.028 "state": "configuring", 00:24:22.028 "raid_level": "raid5f", 00:24:22.028 "superblock": false, 00:24:22.028 "num_base_bdevs": 3, 00:24:22.028 "num_base_bdevs_discovered": 2, 00:24:22.028 "num_base_bdevs_operational": 3, 00:24:22.028 "base_bdevs_list": [ 00:24:22.028 { 00:24:22.028 "name": "BaseBdev1", 00:24:22.028 "uuid": "d3f084e7-85e5-41cc-8141-b92b3114750d", 00:24:22.028 "is_configured": true, 00:24:22.028 "data_offset": 0, 00:24:22.028 "data_size": 65536 00:24:22.028 }, 00:24:22.028 { 00:24:22.028 "name": "BaseBdev2", 00:24:22.028 "uuid": "3cf935da-83a1-42d1-9e87-85d98d777050", 00:24:22.028 "is_configured": true, 00:24:22.028 "data_offset": 0, 00:24:22.028 "data_size": 65536 00:24:22.028 }, 00:24:22.028 { 00:24:22.028 "name": "BaseBdev3", 00:24:22.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.028 "is_configured": false, 00:24:22.028 "data_offset": 0, 00:24:22.028 "data_size": 0 00:24:22.028 } 00:24:22.028 ] 00:24:22.028 }' 00:24:22.028 05:06:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:22.028 05:06:51 -- common/autotest_common.sh@10 -- # set +x 00:24:22.621 05:06:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:22.881 [2024-04-27 05:06:52.683090] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:22.881 [2024-04-27 05:06:52.683234] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:24:22.881 [2024-04-27 05:06:52.683251] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:24:22.881 [2024-04-27 05:06:52.683391] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:24:22.881 [2024-04-27 05:06:52.684334] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:24:22.881 [2024-04-27 05:06:52.684362] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:24:22.881 [2024-04-27 05:06:52.684694] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:22.881 BaseBdev3 00:24:22.881 05:06:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:22.881 05:06:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:24:22.881 05:06:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:22.881 05:06:52 -- common/autotest_common.sh@889 -- # local i 00:24:22.881 05:06:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:22.881 05:06:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:22.881 05:06:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:23.155 05:06:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:23.438 [ 00:24:23.438 { 00:24:23.438 "name": "BaseBdev3", 00:24:23.438 "aliases": [ 00:24:23.438 "ace4fffd-7fe2-44ad-871c-884ef8acbd7c" 00:24:23.438 ], 00:24:23.438 "product_name": "Malloc disk", 00:24:23.438 "block_size": 512, 00:24:23.438 "num_blocks": 65536, 00:24:23.438 "uuid": "ace4fffd-7fe2-44ad-871c-884ef8acbd7c", 00:24:23.438 "assigned_rate_limits": { 00:24:23.438 
"rw_ios_per_sec": 0, 00:24:23.438 "rw_mbytes_per_sec": 0, 00:24:23.438 "r_mbytes_per_sec": 0, 00:24:23.438 "w_mbytes_per_sec": 0 00:24:23.438 }, 00:24:23.438 "claimed": true, 00:24:23.438 "claim_type": "exclusive_write", 00:24:23.438 "zoned": false, 00:24:23.438 "supported_io_types": { 00:24:23.438 "read": true, 00:24:23.438 "write": true, 00:24:23.438 "unmap": true, 00:24:23.438 "write_zeroes": true, 00:24:23.438 "flush": true, 00:24:23.438 "reset": true, 00:24:23.438 "compare": false, 00:24:23.438 "compare_and_write": false, 00:24:23.438 "abort": true, 00:24:23.438 "nvme_admin": false, 00:24:23.438 "nvme_io": false 00:24:23.438 }, 00:24:23.438 "memory_domains": [ 00:24:23.438 { 00:24:23.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:23.438 "dma_device_type": 2 00:24:23.438 } 00:24:23.438 ], 00:24:23.438 "driver_specific": {} 00:24:23.438 } 00:24:23.438 ] 00:24:23.438 05:06:53 -- common/autotest_common.sh@895 -- # return 0 00:24:23.438 05:06:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:23.438 05:06:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:23.438 05:06:53 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:23.438 05:06:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:23.438 05:06:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:23.438 05:06:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:23.438 05:06:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:23.438 05:06:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:23.438 05:06:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:23.438 05:06:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:23.438 05:06:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:23.438 05:06:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:23.438 05:06:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.438 05:06:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:23.699 05:06:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:23.699 "name": "Existed_Raid", 00:24:23.699 "uuid": "d26dca6e-eab8-4437-9340-02f041150ff6", 00:24:23.699 "strip_size_kb": 64, 00:24:23.699 "state": "online", 00:24:23.699 "raid_level": "raid5f", 00:24:23.699 "superblock": false, 00:24:23.699 "num_base_bdevs": 3, 00:24:23.699 "num_base_bdevs_discovered": 3, 00:24:23.699 "num_base_bdevs_operational": 3, 00:24:23.699 "base_bdevs_list": [ 00:24:23.699 { 00:24:23.699 "name": "BaseBdev1", 00:24:23.699 "uuid": "d3f084e7-85e5-41cc-8141-b92b3114750d", 00:24:23.699 "is_configured": true, 00:24:23.699 "data_offset": 0, 00:24:23.699 "data_size": 65536 00:24:23.699 }, 00:24:23.699 { 00:24:23.699 "name": "BaseBdev2", 00:24:23.699 "uuid": "3cf935da-83a1-42d1-9e87-85d98d777050", 00:24:23.699 "is_configured": true, 00:24:23.699 "data_offset": 0, 00:24:23.699 "data_size": 65536 00:24:23.699 }, 00:24:23.699 { 00:24:23.699 "name": "BaseBdev3", 00:24:23.699 "uuid": "ace4fffd-7fe2-44ad-871c-884ef8acbd7c", 00:24:23.699 "is_configured": true, 00:24:23.699 "data_offset": 0, 00:24:23.699 "data_size": 65536 00:24:23.699 } 00:24:23.699 ] 00:24:23.699 }' 00:24:23.699 05:06:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:23.699 05:06:53 -- common/autotest_common.sh@10 -- # set +x 00:24:24.353 05:06:54 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:24:24.611 [2024-04-27 05:06:54.404175] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:24.611 05:06:54 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:24.611 05:06:54 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:24.611 05:06:54 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:24.611 05:06:54 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:24.611 05:06:54 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:24.611 05:06:54 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:24:24.611 05:06:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:24.612 05:06:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:24.612 05:06:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:24.612 05:06:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:24.612 05:06:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:24.612 05:06:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:24.612 05:06:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:24.612 05:06:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:24.612 05:06:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:24.612 05:06:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.612 05:06:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:24.870 05:06:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:24.870 "name": "Existed_Raid", 00:24:24.870 "uuid": "d26dca6e-eab8-4437-9340-02f041150ff6", 00:24:24.870 "strip_size_kb": 64, 00:24:24.870 "state": "online", 00:24:24.870 "raid_level": "raid5f", 00:24:24.870 "superblock": false, 00:24:24.870 "num_base_bdevs": 3, 00:24:24.870 "num_base_bdevs_discovered": 2, 00:24:24.870 "num_base_bdevs_operational": 2, 00:24:24.870 "base_bdevs_list": [ 00:24:24.870 { 00:24:24.870 "name": null, 00:24:24.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.870 "is_configured": false, 00:24:24.870 "data_offset": 0, 00:24:24.870 "data_size": 65536 00:24:24.870 }, 00:24:24.870 { 00:24:24.870 "name": "BaseBdev2", 00:24:24.870 "uuid": "3cf935da-83a1-42d1-9e87-85d98d777050", 00:24:24.870 "is_configured": true, 00:24:24.870 "data_offset": 0, 00:24:24.870 "data_size": 65536 00:24:24.870 }, 00:24:24.870 { 00:24:24.871 "name": "BaseBdev3", 00:24:24.871 "uuid": "ace4fffd-7fe2-44ad-871c-884ef8acbd7c", 00:24:24.871 "is_configured": true, 00:24:24.871 "data_offset": 0, 00:24:24.871 "data_size": 65536 00:24:24.871 } 00:24:24.871 ] 00:24:24.871 }' 00:24:24.871 05:06:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:24.871 05:06:54 -- common/autotest_common.sh@10 -- # set +x 00:24:25.806 05:06:55 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:25.806 05:06:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:25.806 05:06:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.806 05:06:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:25.806 05:06:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:25.806 05:06:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:25.806 05:06:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:26.065 [2024-04-27 05:06:55.921175] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:26.065 [2024-04-27 05:06:55.921241] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:26.065 [2024-04-27 05:06:55.921339] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:26.065 05:06:55 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:26.065 05:06:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:26.065 05:06:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.065 05:06:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:26.632 05:06:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:26.633 05:06:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:26.633 05:06:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:26.633 [2024-04-27 05:06:56.470729] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:26.633 [2024-04-27 05:06:56.470843] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:24:26.633 05:06:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:26.633 05:06:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:26.633 05:06:56 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.633 05:06:56 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:26.891 05:06:56 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:26.891 05:06:56 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:26.891 05:06:56 -- bdev/bdev_raid.sh@287 -- # killprocess 139316 00:24:26.891 05:06:56 -- common/autotest_common.sh@926 -- # '[' -z 139316 ']' 00:24:26.891 05:06:56 -- common/autotest_common.sh@930 -- # kill -0 139316 00:24:26.891 05:06:56 -- common/autotest_common.sh@931 -- # uname 00:24:26.891 05:06:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:26.891 05:06:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 139316 00:24:26.891 killing process with pid 139316 00:24:26.891 05:06:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:26.891 05:06:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:26.891 05:06:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 139316' 00:24:26.891 05:06:56 -- common/autotest_common.sh@945 -- # kill 139316 00:24:26.891 05:06:56 -- common/autotest_common.sh@950 -- # wait 139316 00:24:26.891 [2024-04-27 05:06:56.774029] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:26.891 [2024-04-27 05:06:56.774135] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:27.458 ************************************ 00:24:27.458 END TEST raid5f_state_function_test 00:24:27.458 ************************************ 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:27.458 00:24:27.458 real 0m12.141s 00:24:27.458 user 0m22.101s 00:24:27.458 sys 0m1.641s 00:24:27.458 05:06:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:27.458 05:06:57 -- common/autotest_common.sh@10 -- # set +x 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:24:27.458 05:06:57 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:24:27.458 
05:06:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:27.458 05:06:57 -- common/autotest_common.sh@10 -- # set +x 00:24:27.458 ************************************ 00:24:27.458 START TEST raid5f_state_function_test_sb 00:24:27.458 ************************************ 00:24:27.458 05:06:57 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 true 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:27.458 05:06:57 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:27.459 05:06:57 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:27.459 05:06:57 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:27.459 05:06:57 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:27.459 05:06:57 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:27.459 05:06:57 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:27.459 05:06:57 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:24:27.459 05:06:57 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:24:27.459 05:06:57 -- bdev/bdev_raid.sh@226 -- # raid_pid=139693 00:24:27.459 05:06:57 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:27.459 05:06:57 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 139693' 00:24:27.459 Process raid pid: 139693 00:24:27.459 05:06:57 -- bdev/bdev_raid.sh@228 -- # waitforlisten 139693 /var/tmp/spdk-raid.sock 00:24:27.459 05:06:57 -- common/autotest_common.sh@819 -- # '[' -z 139693 ']' 00:24:27.459 05:06:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:27.459 05:06:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:27.459 05:06:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:27.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:27.459 05:06:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:27.459 05:06:57 -- common/autotest_common.sh@10 -- # set +x 00:24:27.459 [2024-04-27 05:06:57.254261] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:24:27.459 [2024-04-27 05:06:57.254519] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.717 [2024-04-27 05:06:57.421992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.717 [2024-04-27 05:06:57.569564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.975 [2024-04-27 05:06:57.662866] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:28.542 05:06:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:28.542 05:06:58 -- common/autotest_common.sh@852 -- # return 0 00:24:28.542 05:06:58 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:24:28.800 [2024-04-27 05:06:58.476169] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:28.800 [2024-04-27 05:06:58.476283] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:28.800 [2024-04-27 05:06:58.476299] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:28.800 [2024-04-27 05:06:58.476322] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:28.800 [2024-04-27 05:06:58.476331] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:28.800 [2024-04-27 05:06:58.476382] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:28.800 05:06:58 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:24:28.800 05:06:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:28.800 05:06:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:28.800 05:06:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:28.800 05:06:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:28.800 05:06:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:28.800 05:06:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:28.800 05:06:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:28.800 05:06:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:28.800 05:06:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:28.800 05:06:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.800 05:06:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:29.057 05:06:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:29.057 "name": "Existed_Raid", 00:24:29.057 "uuid": "ca7f590e-3c75-46fe-a82a-7b869ed1143b", 00:24:29.057 "strip_size_kb": 64, 00:24:29.057 "state": "configuring", 00:24:29.057 "raid_level": "raid5f", 00:24:29.057 "superblock": true, 00:24:29.057 "num_base_bdevs": 3, 00:24:29.057 "num_base_bdevs_discovered": 0, 00:24:29.057 "num_base_bdevs_operational": 3, 00:24:29.057 "base_bdevs_list": [ 00:24:29.057 { 00:24:29.057 "name": "BaseBdev1", 00:24:29.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.057 "is_configured": false, 00:24:29.057 "data_offset": 0, 00:24:29.057 "data_size": 0 00:24:29.057 }, 00:24:29.057 { 00:24:29.057 "name": "BaseBdev2", 00:24:29.057 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:24:29.057 "is_configured": false, 00:24:29.057 "data_offset": 0, 00:24:29.057 "data_size": 0 00:24:29.057 }, 00:24:29.057 { 00:24:29.057 "name": "BaseBdev3", 00:24:29.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.057 "is_configured": false, 00:24:29.057 "data_offset": 0, 00:24:29.057 "data_size": 0 00:24:29.057 } 00:24:29.057 ] 00:24:29.057 }' 00:24:29.057 05:06:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:29.057 05:06:58 -- common/autotest_common.sh@10 -- # set +x 00:24:29.622 05:06:59 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:29.880 [2024-04-27 05:06:59.676221] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:29.880 [2024-04-27 05:06:59.676291] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:24:29.880 05:06:59 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:24:30.138 [2024-04-27 05:06:59.916333] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:30.138 [2024-04-27 05:06:59.916445] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:30.138 [2024-04-27 05:06:59.916462] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:30.138 [2024-04-27 05:06:59.916494] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:30.138 [2024-04-27 05:06:59.916504] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:30.138 [2024-04-27 05:06:59.916535] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:30.138 05:06:59 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:30.397 [2024-04-27 05:07:00.188544] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:30.397 BaseBdev1 00:24:30.397 05:07:00 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:30.397 05:07:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:24:30.397 05:07:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:30.397 05:07:00 -- common/autotest_common.sh@889 -- # local i 00:24:30.397 05:07:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:30.397 05:07:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:30.397 05:07:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:30.656 05:07:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:30.914 [ 00:24:30.914 { 00:24:30.914 "name": "BaseBdev1", 00:24:30.914 "aliases": [ 00:24:30.914 "5fa08ca5-f1cf-43be-961c-1502361a4e14" 00:24:30.914 ], 00:24:30.914 "product_name": "Malloc disk", 00:24:30.914 "block_size": 512, 00:24:30.914 "num_blocks": 65536, 00:24:30.914 "uuid": "5fa08ca5-f1cf-43be-961c-1502361a4e14", 00:24:30.914 "assigned_rate_limits": { 00:24:30.914 "rw_ios_per_sec": 0, 00:24:30.914 "rw_mbytes_per_sec": 0, 00:24:30.914 "r_mbytes_per_sec": 0, 00:24:30.914 
"w_mbytes_per_sec": 0 00:24:30.914 }, 00:24:30.914 "claimed": true, 00:24:30.914 "claim_type": "exclusive_write", 00:24:30.914 "zoned": false, 00:24:30.914 "supported_io_types": { 00:24:30.914 "read": true, 00:24:30.914 "write": true, 00:24:30.914 "unmap": true, 00:24:30.914 "write_zeroes": true, 00:24:30.914 "flush": true, 00:24:30.914 "reset": true, 00:24:30.914 "compare": false, 00:24:30.914 "compare_and_write": false, 00:24:30.914 "abort": true, 00:24:30.914 "nvme_admin": false, 00:24:30.914 "nvme_io": false 00:24:30.914 }, 00:24:30.914 "memory_domains": [ 00:24:30.914 { 00:24:30.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.914 "dma_device_type": 2 00:24:30.914 } 00:24:30.914 ], 00:24:30.914 "driver_specific": {} 00:24:30.914 } 00:24:30.914 ] 00:24:30.914 05:07:00 -- common/autotest_common.sh@895 -- # return 0 00:24:30.914 05:07:00 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:24:30.914 05:07:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:30.914 05:07:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:30.914 05:07:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:30.914 05:07:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:30.914 05:07:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:30.914 05:07:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:30.914 05:07:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:30.914 05:07:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:30.914 05:07:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:30.914 05:07:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:30.914 05:07:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.172 05:07:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:31.172 "name": "Existed_Raid", 00:24:31.172 "uuid": "794883d4-d96f-4c88-934f-50756612467c", 00:24:31.172 "strip_size_kb": 64, 00:24:31.172 "state": "configuring", 00:24:31.172 "raid_level": "raid5f", 00:24:31.172 "superblock": true, 00:24:31.172 "num_base_bdevs": 3, 00:24:31.172 "num_base_bdevs_discovered": 1, 00:24:31.172 "num_base_bdevs_operational": 3, 00:24:31.172 "base_bdevs_list": [ 00:24:31.172 { 00:24:31.172 "name": "BaseBdev1", 00:24:31.172 "uuid": "5fa08ca5-f1cf-43be-961c-1502361a4e14", 00:24:31.172 "is_configured": true, 00:24:31.172 "data_offset": 2048, 00:24:31.172 "data_size": 63488 00:24:31.172 }, 00:24:31.172 { 00:24:31.172 "name": "BaseBdev2", 00:24:31.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.172 "is_configured": false, 00:24:31.172 "data_offset": 0, 00:24:31.172 "data_size": 0 00:24:31.172 }, 00:24:31.172 { 00:24:31.172 "name": "BaseBdev3", 00:24:31.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.172 "is_configured": false, 00:24:31.172 "data_offset": 0, 00:24:31.172 "data_size": 0 00:24:31.172 } 00:24:31.172 ] 00:24:31.172 }' 00:24:31.172 05:07:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:31.172 05:07:00 -- common/autotest_common.sh@10 -- # set +x 00:24:31.738 05:07:01 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:31.996 [2024-04-27 05:07:01.897112] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:31.996 [2024-04-27 05:07:01.897219] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:24:32.255 05:07:01 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:24:32.255 05:07:01 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:32.513 05:07:02 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:32.771 BaseBdev1 00:24:32.771 05:07:02 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:24:32.771 05:07:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:24:32.771 05:07:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:32.771 05:07:02 -- common/autotest_common.sh@889 -- # local i 00:24:32.771 05:07:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:32.771 05:07:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:32.771 05:07:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:33.029 05:07:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:33.286 [ 00:24:33.286 { 00:24:33.286 "name": "BaseBdev1", 00:24:33.286 "aliases": [ 00:24:33.286 "ef31780d-15b0-4679-884e-0ef4a0ff2781" 00:24:33.286 ], 00:24:33.286 "product_name": "Malloc disk", 00:24:33.286 "block_size": 512, 00:24:33.286 "num_blocks": 65536, 00:24:33.286 "uuid": "ef31780d-15b0-4679-884e-0ef4a0ff2781", 00:24:33.286 "assigned_rate_limits": { 00:24:33.286 "rw_ios_per_sec": 0, 00:24:33.286 "rw_mbytes_per_sec": 0, 00:24:33.286 "r_mbytes_per_sec": 0, 00:24:33.286 "w_mbytes_per_sec": 0 00:24:33.286 }, 00:24:33.286 "claimed": false, 00:24:33.286 "zoned": false, 00:24:33.286 "supported_io_types": { 00:24:33.286 "read": true, 00:24:33.286 "write": true, 00:24:33.286 "unmap": true, 00:24:33.286 "write_zeroes": true, 00:24:33.286 "flush": true, 00:24:33.286 "reset": true, 00:24:33.286 "compare": false, 00:24:33.287 "compare_and_write": false, 00:24:33.287 "abort": true, 00:24:33.287 "nvme_admin": false, 00:24:33.287 "nvme_io": false 00:24:33.287 }, 00:24:33.287 "memory_domains": [ 00:24:33.287 { 00:24:33.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.287 "dma_device_type": 2 00:24:33.287 } 00:24:33.287 ], 00:24:33.287 "driver_specific": {} 00:24:33.287 } 00:24:33.287 ] 00:24:33.287 05:07:03 -- common/autotest_common.sh@895 -- # return 0 00:24:33.287 05:07:03 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:24:33.545 [2024-04-27 05:07:03.269685] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:33.545 [2024-04-27 05:07:03.272180] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:33.545 [2024-04-27 05:07:03.272259] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:33.545 [2024-04-27 05:07:03.272274] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:33.545 [2024-04-27 05:07:03.272304] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:33.545 05:07:03 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:33.545 05:07:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:33.545 
05:07:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:24:33.545 05:07:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:33.545 05:07:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:33.545 05:07:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:33.545 05:07:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:33.545 05:07:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:33.545 05:07:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:33.545 05:07:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:33.545 05:07:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:33.545 05:07:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:33.545 05:07:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:33.545 05:07:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.803 05:07:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:33.803 "name": "Existed_Raid", 00:24:33.803 "uuid": "1de318bf-08c9-4a35-b5ac-a490360a6a05", 00:24:33.803 "strip_size_kb": 64, 00:24:33.803 "state": "configuring", 00:24:33.803 "raid_level": "raid5f", 00:24:33.803 "superblock": true, 00:24:33.803 "num_base_bdevs": 3, 00:24:33.803 "num_base_bdevs_discovered": 1, 00:24:33.803 "num_base_bdevs_operational": 3, 00:24:33.803 "base_bdevs_list": [ 00:24:33.803 { 00:24:33.803 "name": "BaseBdev1", 00:24:33.803 "uuid": "ef31780d-15b0-4679-884e-0ef4a0ff2781", 00:24:33.803 "is_configured": true, 00:24:33.803 "data_offset": 2048, 00:24:33.803 "data_size": 63488 00:24:33.803 }, 00:24:33.803 { 00:24:33.803 "name": "BaseBdev2", 00:24:33.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.803 "is_configured": false, 00:24:33.803 "data_offset": 0, 00:24:33.803 "data_size": 0 00:24:33.803 }, 00:24:33.803 { 00:24:33.803 "name": "BaseBdev3", 00:24:33.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.803 "is_configured": false, 00:24:33.803 "data_offset": 0, 00:24:33.803 "data_size": 0 00:24:33.803 } 00:24:33.803 ] 00:24:33.803 }' 00:24:33.803 05:07:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:33.803 05:07:03 -- common/autotest_common.sh@10 -- # set +x 00:24:34.383 05:07:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:34.655 [2024-04-27 05:07:04.454206] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:34.655 BaseBdev2 00:24:34.655 05:07:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:34.655 05:07:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:24:34.655 05:07:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:34.655 05:07:04 -- common/autotest_common.sh@889 -- # local i 00:24:34.655 05:07:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:34.655 05:07:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:34.655 05:07:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:34.913 05:07:04 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:35.172 [ 00:24:35.172 { 00:24:35.172 "name": "BaseBdev2", 00:24:35.172 "aliases": [ 00:24:35.172 
"bb175904-f4f0-44eb-8efc-9d5203e5c96e" 00:24:35.172 ], 00:24:35.172 "product_name": "Malloc disk", 00:24:35.172 "block_size": 512, 00:24:35.172 "num_blocks": 65536, 00:24:35.172 "uuid": "bb175904-f4f0-44eb-8efc-9d5203e5c96e", 00:24:35.172 "assigned_rate_limits": { 00:24:35.172 "rw_ios_per_sec": 0, 00:24:35.172 "rw_mbytes_per_sec": 0, 00:24:35.172 "r_mbytes_per_sec": 0, 00:24:35.172 "w_mbytes_per_sec": 0 00:24:35.172 }, 00:24:35.172 "claimed": true, 00:24:35.172 "claim_type": "exclusive_write", 00:24:35.172 "zoned": false, 00:24:35.172 "supported_io_types": { 00:24:35.172 "read": true, 00:24:35.172 "write": true, 00:24:35.172 "unmap": true, 00:24:35.172 "write_zeroes": true, 00:24:35.172 "flush": true, 00:24:35.172 "reset": true, 00:24:35.172 "compare": false, 00:24:35.172 "compare_and_write": false, 00:24:35.172 "abort": true, 00:24:35.172 "nvme_admin": false, 00:24:35.172 "nvme_io": false 00:24:35.172 }, 00:24:35.172 "memory_domains": [ 00:24:35.172 { 00:24:35.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.172 "dma_device_type": 2 00:24:35.172 } 00:24:35.172 ], 00:24:35.172 "driver_specific": {} 00:24:35.172 } 00:24:35.172 ] 00:24:35.172 05:07:05 -- common/autotest_common.sh@895 -- # return 0 00:24:35.172 05:07:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:35.172 05:07:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:35.172 05:07:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:24:35.172 05:07:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:35.172 05:07:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:35.172 05:07:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:35.172 05:07:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:35.172 05:07:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:35.172 05:07:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:35.172 05:07:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:35.172 05:07:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:35.172 05:07:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:35.172 05:07:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.172 05:07:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:35.430 05:07:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:35.430 "name": "Existed_Raid", 00:24:35.430 "uuid": "1de318bf-08c9-4a35-b5ac-a490360a6a05", 00:24:35.430 "strip_size_kb": 64, 00:24:35.430 "state": "configuring", 00:24:35.430 "raid_level": "raid5f", 00:24:35.430 "superblock": true, 00:24:35.430 "num_base_bdevs": 3, 00:24:35.430 "num_base_bdevs_discovered": 2, 00:24:35.430 "num_base_bdevs_operational": 3, 00:24:35.431 "base_bdevs_list": [ 00:24:35.431 { 00:24:35.431 "name": "BaseBdev1", 00:24:35.431 "uuid": "ef31780d-15b0-4679-884e-0ef4a0ff2781", 00:24:35.431 "is_configured": true, 00:24:35.431 "data_offset": 2048, 00:24:35.431 "data_size": 63488 00:24:35.431 }, 00:24:35.431 { 00:24:35.431 "name": "BaseBdev2", 00:24:35.431 "uuid": "bb175904-f4f0-44eb-8efc-9d5203e5c96e", 00:24:35.431 "is_configured": true, 00:24:35.431 "data_offset": 2048, 00:24:35.431 "data_size": 63488 00:24:35.431 }, 00:24:35.431 { 00:24:35.431 "name": "BaseBdev3", 00:24:35.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.431 "is_configured": false, 00:24:35.431 "data_offset": 0, 00:24:35.431 "data_size": 0 
00:24:35.431 } 00:24:35.431 ] 00:24:35.431 }' 00:24:35.431 05:07:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:35.431 05:07:05 -- common/autotest_common.sh@10 -- # set +x 00:24:36.374 05:07:05 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:36.374 [2024-04-27 05:07:06.195554] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:36.374 [2024-04-27 05:07:06.195886] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:24:36.374 [2024-04-27 05:07:06.195904] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:36.374 [2024-04-27 05:07:06.196074] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:24:36.374 [2024-04-27 05:07:06.197033] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:24:36.374 [2024-04-27 05:07:06.197062] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:24:36.374 [2024-04-27 05:07:06.197241] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:36.374 BaseBdev3 00:24:36.374 05:07:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:36.374 05:07:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:24:36.374 05:07:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:36.374 05:07:06 -- common/autotest_common.sh@889 -- # local i 00:24:36.374 05:07:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:36.374 05:07:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:36.374 05:07:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:36.632 05:07:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:36.889 [ 00:24:36.889 { 00:24:36.889 "name": "BaseBdev3", 00:24:36.889 "aliases": [ 00:24:36.889 "7330035b-0818-4ba5-a254-0c58fc74ef0e" 00:24:36.889 ], 00:24:36.889 "product_name": "Malloc disk", 00:24:36.889 "block_size": 512, 00:24:36.889 "num_blocks": 65536, 00:24:36.889 "uuid": "7330035b-0818-4ba5-a254-0c58fc74ef0e", 00:24:36.889 "assigned_rate_limits": { 00:24:36.889 "rw_ios_per_sec": 0, 00:24:36.889 "rw_mbytes_per_sec": 0, 00:24:36.889 "r_mbytes_per_sec": 0, 00:24:36.889 "w_mbytes_per_sec": 0 00:24:36.889 }, 00:24:36.889 "claimed": true, 00:24:36.889 "claim_type": "exclusive_write", 00:24:36.889 "zoned": false, 00:24:36.889 "supported_io_types": { 00:24:36.889 "read": true, 00:24:36.889 "write": true, 00:24:36.889 "unmap": true, 00:24:36.889 "write_zeroes": true, 00:24:36.889 "flush": true, 00:24:36.889 "reset": true, 00:24:36.889 "compare": false, 00:24:36.889 "compare_and_write": false, 00:24:36.889 "abort": true, 00:24:36.889 "nvme_admin": false, 00:24:36.889 "nvme_io": false 00:24:36.889 }, 00:24:36.889 "memory_domains": [ 00:24:36.889 { 00:24:36.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:36.889 "dma_device_type": 2 00:24:36.889 } 00:24:36.889 ], 00:24:36.889 "driver_specific": {} 00:24:36.889 } 00:24:36.889 ] 00:24:36.889 05:07:06 -- common/autotest_common.sh@895 -- # return 0 00:24:36.889 05:07:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:36.889 05:07:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:36.889 05:07:06 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:36.889 05:07:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:36.889 05:07:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:36.889 05:07:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:36.889 05:07:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:36.889 05:07:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:36.889 05:07:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:36.889 05:07:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:36.889 05:07:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:36.889 05:07:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:36.889 05:07:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.889 05:07:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:37.147 05:07:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:37.147 "name": "Existed_Raid", 00:24:37.147 "uuid": "1de318bf-08c9-4a35-b5ac-a490360a6a05", 00:24:37.147 "strip_size_kb": 64, 00:24:37.147 "state": "online", 00:24:37.147 "raid_level": "raid5f", 00:24:37.147 "superblock": true, 00:24:37.147 "num_base_bdevs": 3, 00:24:37.147 "num_base_bdevs_discovered": 3, 00:24:37.147 "num_base_bdevs_operational": 3, 00:24:37.147 "base_bdevs_list": [ 00:24:37.147 { 00:24:37.147 "name": "BaseBdev1", 00:24:37.147 "uuid": "ef31780d-15b0-4679-884e-0ef4a0ff2781", 00:24:37.147 "is_configured": true, 00:24:37.147 "data_offset": 2048, 00:24:37.147 "data_size": 63488 00:24:37.147 }, 00:24:37.147 { 00:24:37.147 "name": "BaseBdev2", 00:24:37.147 "uuid": "bb175904-f4f0-44eb-8efc-9d5203e5c96e", 00:24:37.147 "is_configured": true, 00:24:37.147 "data_offset": 2048, 00:24:37.147 "data_size": 63488 00:24:37.147 }, 00:24:37.147 { 00:24:37.147 "name": "BaseBdev3", 00:24:37.147 "uuid": "7330035b-0818-4ba5-a254-0c58fc74ef0e", 00:24:37.147 "is_configured": true, 00:24:37.147 "data_offset": 2048, 00:24:37.147 "data_size": 63488 00:24:37.147 } 00:24:37.147 ] 00:24:37.147 }' 00:24:37.147 05:07:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:37.147 05:07:07 -- common/autotest_common.sh@10 -- # set +x 00:24:37.722 05:07:07 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:37.980 [2024-04-27 05:07:07.848434] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:37.980 05:07:07 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:37.980 05:07:07 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:37.980 05:07:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:37.980 05:07:07 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:37.980 05:07:07 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:37.980 05:07:07 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:24:37.980 05:07:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:37.980 05:07:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:37.980 05:07:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:37.980 05:07:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:37.980 05:07:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:37.980 05:07:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:37.980 05:07:07 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:37.980 05:07:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:37.980 05:07:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:38.238 05:07:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.238 05:07:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:38.495 05:07:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:38.495 "name": "Existed_Raid", 00:24:38.495 "uuid": "1de318bf-08c9-4a35-b5ac-a490360a6a05", 00:24:38.495 "strip_size_kb": 64, 00:24:38.495 "state": "online", 00:24:38.495 "raid_level": "raid5f", 00:24:38.495 "superblock": true, 00:24:38.495 "num_base_bdevs": 3, 00:24:38.495 "num_base_bdevs_discovered": 2, 00:24:38.495 "num_base_bdevs_operational": 2, 00:24:38.496 "base_bdevs_list": [ 00:24:38.496 { 00:24:38.496 "name": null, 00:24:38.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.496 "is_configured": false, 00:24:38.496 "data_offset": 2048, 00:24:38.496 "data_size": 63488 00:24:38.496 }, 00:24:38.496 { 00:24:38.496 "name": "BaseBdev2", 00:24:38.496 "uuid": "bb175904-f4f0-44eb-8efc-9d5203e5c96e", 00:24:38.496 "is_configured": true, 00:24:38.496 "data_offset": 2048, 00:24:38.496 "data_size": 63488 00:24:38.496 }, 00:24:38.496 { 00:24:38.496 "name": "BaseBdev3", 00:24:38.496 "uuid": "7330035b-0818-4ba5-a254-0c58fc74ef0e", 00:24:38.496 "is_configured": true, 00:24:38.496 "data_offset": 2048, 00:24:38.496 "data_size": 63488 00:24:38.496 } 00:24:38.496 ] 00:24:38.496 }' 00:24:38.496 05:07:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:38.496 05:07:08 -- common/autotest_common.sh@10 -- # set +x 00:24:39.061 05:07:08 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:39.061 05:07:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:39.061 05:07:08 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:39.061 05:07:08 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.318 05:07:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:39.318 05:07:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:39.318 05:07:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:39.576 [2024-04-27 05:07:09.344861] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:39.576 [2024-04-27 05:07:09.344922] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:39.576 [2024-04-27 05:07:09.345023] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:39.576 05:07:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:39.576 05:07:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:39.576 05:07:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:39.576 05:07:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.834 05:07:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:39.834 05:07:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:39.835 05:07:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:40.093 [2024-04-27 05:07:09.865307] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev3 00:24:40.093 [2024-04-27 05:07:09.865427] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:24:40.093 05:07:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:40.093 05:07:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:40.093 05:07:09 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.093 05:07:09 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:40.351 05:07:10 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:40.351 05:07:10 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:40.351 05:07:10 -- bdev/bdev_raid.sh@287 -- # killprocess 139693 00:24:40.351 05:07:10 -- common/autotest_common.sh@926 -- # '[' -z 139693 ']' 00:24:40.351 05:07:10 -- common/autotest_common.sh@930 -- # kill -0 139693 00:24:40.351 05:07:10 -- common/autotest_common.sh@931 -- # uname 00:24:40.351 05:07:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:40.351 05:07:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 139693 00:24:40.351 05:07:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:40.351 05:07:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:40.351 05:07:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 139693' 00:24:40.351 killing process with pid 139693 00:24:40.351 05:07:10 -- common/autotest_common.sh@945 -- # kill 139693 00:24:40.351 05:07:10 -- common/autotest_common.sh@950 -- # wait 139693 00:24:40.351 [2024-04-27 05:07:10.169595] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:40.351 [2024-04-27 05:07:10.169699] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:40.610 05:07:10 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:40.610 00:24:40.610 real 0m13.328s 00:24:40.610 user 0m24.120s 00:24:40.610 sys 0m1.918s 00:24:40.610 05:07:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:40.610 05:07:10 -- common/autotest_common.sh@10 -- # set +x 00:24:40.610 ************************************ 00:24:40.610 END TEST raid5f_state_function_test_sb 00:24:40.610 ************************************ 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:24:40.869 05:07:10 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:24:40.869 05:07:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:40.869 05:07:10 -- common/autotest_common.sh@10 -- # set +x 00:24:40.869 ************************************ 00:24:40.869 START TEST raid5f_superblock_test 00:24:40.869 ************************************ 00:24:40.869 05:07:10 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 3 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@344 -- # local 
strip_size 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@357 -- # raid_pid=140085 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:40.869 05:07:10 -- bdev/bdev_raid.sh@358 -- # waitforlisten 140085 /var/tmp/spdk-raid.sock 00:24:40.869 05:07:10 -- common/autotest_common.sh@819 -- # '[' -z 140085 ']' 00:24:40.869 05:07:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:40.869 05:07:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:40.869 05:07:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:40.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:40.869 05:07:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:40.869 05:07:10 -- common/autotest_common.sh@10 -- # set +x 00:24:40.869 [2024-04-27 05:07:10.645291] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:40.869 [2024-04-27 05:07:10.645552] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140085 ] 00:24:41.128 [2024-04-27 05:07:10.810724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.128 [2024-04-27 05:07:10.931216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.128 [2024-04-27 05:07:11.008524] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:42.063 05:07:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:42.063 05:07:11 -- common/autotest_common.sh@852 -- # return 0 00:24:42.063 05:07:11 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:24:42.063 05:07:11 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:42.063 05:07:11 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:24:42.063 05:07:11 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:24:42.063 05:07:11 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:42.063 05:07:11 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:42.063 05:07:11 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:42.063 05:07:11 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:42.063 05:07:11 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:42.063 malloc1 00:24:42.063 05:07:11 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:42.322 [2024-04-27 05:07:12.134215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:42.322 [2024-04-27 05:07:12.134388] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:24:42.322 [2024-04-27 05:07:12.134462] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:24:42.322 [2024-04-27 05:07:12.134545] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:42.322 [2024-04-27 05:07:12.137593] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:42.322 [2024-04-27 05:07:12.137668] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:42.322 pt1 00:24:42.322 05:07:12 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:42.322 05:07:12 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:42.322 05:07:12 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:24:42.322 05:07:12 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:24:42.322 05:07:12 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:42.322 05:07:12 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:42.322 05:07:12 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:42.322 05:07:12 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:42.322 05:07:12 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:42.581 malloc2 00:24:42.581 05:07:12 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:42.839 [2024-04-27 05:07:12.617165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:42.839 [2024-04-27 05:07:12.617286] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:42.839 [2024-04-27 05:07:12.617350] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:42.839 [2024-04-27 05:07:12.617427] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:42.839 [2024-04-27 05:07:12.620305] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:42.839 [2024-04-27 05:07:12.620369] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:42.839 pt2 00:24:42.839 05:07:12 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:42.839 05:07:12 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:42.839 05:07:12 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:24:42.839 05:07:12 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:24:42.839 05:07:12 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:42.839 05:07:12 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:42.839 05:07:12 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:42.839 05:07:12 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:42.839 05:07:12 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:43.098 malloc3 00:24:43.098 05:07:12 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:43.355 [2024-04-27 05:07:13.199671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:43.355 [2024-04-27 05:07:13.199802] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:24:43.355 [2024-04-27 05:07:13.199866] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:24:43.355 [2024-04-27 05:07:13.199924] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:43.355 [2024-04-27 05:07:13.202772] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:43.355 [2024-04-27 05:07:13.202839] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:43.355 pt3 00:24:43.355 05:07:13 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:43.355 05:07:13 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:43.355 05:07:13 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:24:43.612 [2024-04-27 05:07:13.439816] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:43.612 [2024-04-27 05:07:13.442350] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:43.612 [2024-04-27 05:07:13.442455] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:43.612 [2024-04-27 05:07:13.442731] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:24:43.612 [2024-04-27 05:07:13.442759] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:43.612 [2024-04-27 05:07:13.442940] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:24:43.612 [2024-04-27 05:07:13.443828] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:24:43.612 [2024-04-27 05:07:13.443856] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:24:43.612 [2024-04-27 05:07:13.444098] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:43.612 05:07:13 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:43.612 05:07:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:43.612 05:07:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:43.612 05:07:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:43.612 05:07:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:43.612 05:07:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:43.612 05:07:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:43.612 05:07:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:43.612 05:07:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:43.612 05:07:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:43.613 05:07:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.613 05:07:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.870 05:07:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:43.870 "name": "raid_bdev1", 00:24:43.870 "uuid": "7971b4e1-7709-46ff-adaa-c79f43adb011", 00:24:43.870 "strip_size_kb": 64, 00:24:43.870 "state": "online", 00:24:43.870 "raid_level": "raid5f", 00:24:43.870 "superblock": true, 00:24:43.870 "num_base_bdevs": 3, 00:24:43.870 "num_base_bdevs_discovered": 3, 00:24:43.870 "num_base_bdevs_operational": 3, 00:24:43.871 "base_bdevs_list": [ 00:24:43.871 { 00:24:43.871 "name": "pt1", 00:24:43.871 "uuid": 
"5e48c1d0-6bfb-5ef0-bb71-fa63679411f1", 00:24:43.871 "is_configured": true, 00:24:43.871 "data_offset": 2048, 00:24:43.871 "data_size": 63488 00:24:43.871 }, 00:24:43.871 { 00:24:43.871 "name": "pt2", 00:24:43.871 "uuid": "ed08899b-63ec-57e9-926a-1a3c11976938", 00:24:43.871 "is_configured": true, 00:24:43.871 "data_offset": 2048, 00:24:43.871 "data_size": 63488 00:24:43.871 }, 00:24:43.871 { 00:24:43.871 "name": "pt3", 00:24:43.871 "uuid": "d163315a-0bec-5136-a0f1-281926db907f", 00:24:43.871 "is_configured": true, 00:24:43.871 "data_offset": 2048, 00:24:43.871 "data_size": 63488 00:24:43.871 } 00:24:43.871 ] 00:24:43.871 }' 00:24:43.871 05:07:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:43.871 05:07:13 -- common/autotest_common.sh@10 -- # set +x 00:24:44.804 05:07:14 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:44.804 05:07:14 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:24:44.804 [2024-04-27 05:07:14.604596] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:44.804 05:07:14 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=7971b4e1-7709-46ff-adaa-c79f43adb011 00:24:44.804 05:07:14 -- bdev/bdev_raid.sh@380 -- # '[' -z 7971b4e1-7709-46ff-adaa-c79f43adb011 ']' 00:24:44.804 05:07:14 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:45.062 [2024-04-27 05:07:14.888438] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:45.062 [2024-04-27 05:07:14.888487] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:45.062 [2024-04-27 05:07:14.888621] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:45.062 [2024-04-27 05:07:14.888737] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:45.062 [2024-04-27 05:07:14.888755] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:24:45.062 05:07:14 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.062 05:07:14 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:24:45.320 05:07:15 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:24:45.320 05:07:15 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:24:45.320 05:07:15 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:45.320 05:07:15 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:45.579 05:07:15 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:45.579 05:07:15 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:45.837 05:07:15 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:45.837 05:07:15 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:46.095 05:07:15 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:46.095 05:07:15 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:46.354 05:07:16 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:24:46.354 05:07:16 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:24:46.354 05:07:16 -- common/autotest_common.sh@640 -- # local es=0 00:24:46.354 05:07:16 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:24:46.354 05:07:16 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:46.354 05:07:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:46.354 05:07:16 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:46.354 05:07:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:46.354 05:07:16 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:46.354 05:07:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:46.354 05:07:16 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:46.354 05:07:16 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:46.354 05:07:16 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:24:46.613 [2024-04-27 05:07:16.416809] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:46.613 [2024-04-27 05:07:16.419334] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:46.613 [2024-04-27 05:07:16.419404] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:46.613 [2024-04-27 05:07:16.419477] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:24:46.613 [2024-04-27 05:07:16.419596] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:24:46.613 [2024-04-27 05:07:16.419637] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:24:46.613 [2024-04-27 05:07:16.419696] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:46.613 [2024-04-27 05:07:16.419712] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:24:46.613 request: 00:24:46.613 { 00:24:46.613 "name": "raid_bdev1", 00:24:46.613 "raid_level": "raid5f", 00:24:46.613 "base_bdevs": [ 00:24:46.613 "malloc1", 00:24:46.613 "malloc2", 00:24:46.613 "malloc3" 00:24:46.613 ], 00:24:46.613 "superblock": false, 00:24:46.613 "strip_size_kb": 64, 00:24:46.613 "method": "bdev_raid_create", 00:24:46.613 "req_id": 1 00:24:46.613 } 00:24:46.613 Got JSON-RPC error response 00:24:46.613 response: 00:24:46.613 { 00:24:46.613 "code": -17, 00:24:46.613 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:46.613 } 00:24:46.613 05:07:16 -- common/autotest_common.sh@643 -- # es=1 00:24:46.613 05:07:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:46.613 05:07:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:46.613 05:07:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:46.613 05:07:16 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.613 05:07:16 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:24:46.871 05:07:16 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:24:46.872 05:07:16 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:24:46.872 05:07:16 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:47.129 [2024-04-27 05:07:16.940878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:47.130 [2024-04-27 05:07:16.941007] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:47.130 [2024-04-27 05:07:16.941061] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:47.130 [2024-04-27 05:07:16.941089] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:47.130 [2024-04-27 05:07:16.943964] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:47.130 [2024-04-27 05:07:16.944025] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:47.130 [2024-04-27 05:07:16.944172] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:47.130 [2024-04-27 05:07:16.944236] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:47.130 pt1 00:24:47.130 05:07:16 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:47.130 05:07:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:47.130 05:07:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:47.130 05:07:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:47.130 05:07:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:47.130 05:07:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:47.130 05:07:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:47.130 05:07:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:47.130 05:07:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:47.130 05:07:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:47.130 05:07:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.130 05:07:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.388 05:07:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:47.388 "name": "raid_bdev1", 00:24:47.388 "uuid": "7971b4e1-7709-46ff-adaa-c79f43adb011", 00:24:47.388 "strip_size_kb": 64, 00:24:47.388 "state": "configuring", 00:24:47.388 "raid_level": "raid5f", 00:24:47.388 "superblock": true, 00:24:47.388 "num_base_bdevs": 3, 00:24:47.388 "num_base_bdevs_discovered": 1, 00:24:47.388 "num_base_bdevs_operational": 3, 00:24:47.388 "base_bdevs_list": [ 00:24:47.388 { 00:24:47.388 "name": "pt1", 00:24:47.388 "uuid": "5e48c1d0-6bfb-5ef0-bb71-fa63679411f1", 00:24:47.388 "is_configured": true, 00:24:47.388 "data_offset": 2048, 00:24:47.388 "data_size": 63488 00:24:47.388 }, 00:24:47.388 { 00:24:47.388 "name": null, 00:24:47.388 "uuid": "ed08899b-63ec-57e9-926a-1a3c11976938", 00:24:47.388 "is_configured": false, 00:24:47.388 "data_offset": 2048, 00:24:47.388 "data_size": 63488 00:24:47.388 }, 00:24:47.388 { 00:24:47.388 "name": null, 00:24:47.388 "uuid": "d163315a-0bec-5136-a0f1-281926db907f", 00:24:47.388 "is_configured": false, 00:24:47.388 
"data_offset": 2048, 00:24:47.388 "data_size": 63488 00:24:47.388 } 00:24:47.388 ] 00:24:47.388 }' 00:24:47.388 05:07:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:47.388 05:07:17 -- common/autotest_common.sh@10 -- # set +x 00:24:47.954 05:07:17 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:24:48.213 05:07:17 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:48.213 [2024-04-27 05:07:18.077178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:48.213 [2024-04-27 05:07:18.077319] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:48.213 [2024-04-27 05:07:18.077392] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:24:48.213 [2024-04-27 05:07:18.077421] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:48.213 [2024-04-27 05:07:18.077972] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:48.213 [2024-04-27 05:07:18.078027] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:48.213 [2024-04-27 05:07:18.078159] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:48.213 [2024-04-27 05:07:18.078193] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:48.213 pt2 00:24:48.213 05:07:18 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:48.472 [2024-04-27 05:07:18.321405] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:48.472 05:07:18 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:48.472 05:07:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:48.472 05:07:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:48.472 05:07:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:48.472 05:07:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:48.472 05:07:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:48.472 05:07:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:48.472 05:07:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:48.472 05:07:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:48.472 05:07:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:48.472 05:07:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.472 05:07:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.731 05:07:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:48.731 "name": "raid_bdev1", 00:24:48.731 "uuid": "7971b4e1-7709-46ff-adaa-c79f43adb011", 00:24:48.731 "strip_size_kb": 64, 00:24:48.732 "state": "configuring", 00:24:48.732 "raid_level": "raid5f", 00:24:48.732 "superblock": true, 00:24:48.732 "num_base_bdevs": 3, 00:24:48.732 "num_base_bdevs_discovered": 1, 00:24:48.732 "num_base_bdevs_operational": 3, 00:24:48.732 "base_bdevs_list": [ 00:24:48.732 { 00:24:48.732 "name": "pt1", 00:24:48.732 "uuid": "5e48c1d0-6bfb-5ef0-bb71-fa63679411f1", 00:24:48.732 "is_configured": true, 00:24:48.732 "data_offset": 2048, 00:24:48.732 "data_size": 63488 00:24:48.732 }, 00:24:48.732 { 00:24:48.732 "name": null, 00:24:48.732 "uuid": 
"ed08899b-63ec-57e9-926a-1a3c11976938", 00:24:48.732 "is_configured": false, 00:24:48.732 "data_offset": 2048, 00:24:48.732 "data_size": 63488 00:24:48.732 }, 00:24:48.732 { 00:24:48.732 "name": null, 00:24:48.732 "uuid": "d163315a-0bec-5136-a0f1-281926db907f", 00:24:48.732 "is_configured": false, 00:24:48.732 "data_offset": 2048, 00:24:48.732 "data_size": 63488 00:24:48.732 } 00:24:48.732 ] 00:24:48.732 }' 00:24:48.732 05:07:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:48.732 05:07:18 -- common/autotest_common.sh@10 -- # set +x 00:24:49.668 05:07:19 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:24:49.668 05:07:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:49.668 05:07:19 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:49.668 [2024-04-27 05:07:19.485142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:49.668 [2024-04-27 05:07:19.485290] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.668 [2024-04-27 05:07:19.485349] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:24:49.668 [2024-04-27 05:07:19.485386] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.668 [2024-04-27 05:07:19.485981] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.668 [2024-04-27 05:07:19.486038] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:49.668 [2024-04-27 05:07:19.486169] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:49.668 [2024-04-27 05:07:19.486211] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:49.668 pt2 00:24:49.668 05:07:19 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:49.668 05:07:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:49.668 05:07:19 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:49.927 [2024-04-27 05:07:19.725067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:49.927 [2024-04-27 05:07:19.725212] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.927 [2024-04-27 05:07:19.725269] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:49.927 [2024-04-27 05:07:19.725306] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.927 [2024-04-27 05:07:19.725881] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.927 [2024-04-27 05:07:19.725937] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:49.927 [2024-04-27 05:07:19.726097] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:49.927 [2024-04-27 05:07:19.726137] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:49.927 [2024-04-27 05:07:19.726333] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:24:49.927 [2024-04-27 05:07:19.726360] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:49.927 [2024-04-27 05:07:19.726461] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:24:49.927 [2024-04-27 05:07:19.727232] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:24:49.927 [2024-04-27 05:07:19.727260] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:24:49.927 [2024-04-27 05:07:19.727400] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:49.927 pt3 00:24:49.927 05:07:19 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:49.927 05:07:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:49.927 05:07:19 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:49.927 05:07:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:49.927 05:07:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:49.927 05:07:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:49.927 05:07:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:49.927 05:07:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:49.927 05:07:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:49.927 05:07:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:49.927 05:07:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:49.927 05:07:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:49.927 05:07:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.927 05:07:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.186 05:07:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:50.186 "name": "raid_bdev1", 00:24:50.186 "uuid": "7971b4e1-7709-46ff-adaa-c79f43adb011", 00:24:50.186 "strip_size_kb": 64, 00:24:50.186 "state": "online", 00:24:50.186 "raid_level": "raid5f", 00:24:50.186 "superblock": true, 00:24:50.186 "num_base_bdevs": 3, 00:24:50.186 "num_base_bdevs_discovered": 3, 00:24:50.186 "num_base_bdevs_operational": 3, 00:24:50.186 "base_bdevs_list": [ 00:24:50.186 { 00:24:50.186 "name": "pt1", 00:24:50.186 "uuid": "5e48c1d0-6bfb-5ef0-bb71-fa63679411f1", 00:24:50.186 "is_configured": true, 00:24:50.186 "data_offset": 2048, 00:24:50.186 "data_size": 63488 00:24:50.186 }, 00:24:50.186 { 00:24:50.186 "name": "pt2", 00:24:50.186 "uuid": "ed08899b-63ec-57e9-926a-1a3c11976938", 00:24:50.186 "is_configured": true, 00:24:50.186 "data_offset": 2048, 00:24:50.186 "data_size": 63488 00:24:50.186 }, 00:24:50.186 { 00:24:50.186 "name": "pt3", 00:24:50.186 "uuid": "d163315a-0bec-5136-a0f1-281926db907f", 00:24:50.186 "is_configured": true, 00:24:50.186 "data_offset": 2048, 00:24:50.186 "data_size": 63488 00:24:50.186 } 00:24:50.186 ] 00:24:50.186 }' 00:24:50.186 05:07:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:50.186 05:07:19 -- common/autotest_common.sh@10 -- # set +x 00:24:50.754 05:07:20 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:50.754 05:07:20 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:24:51.013 [2024-04-27 05:07:20.825918] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:51.013 05:07:20 -- bdev/bdev_raid.sh@430 -- # '[' 7971b4e1-7709-46ff-adaa-c79f43adb011 '!=' 7971b4e1-7709-46ff-adaa-c79f43adb011 ']' 00:24:51.013 05:07:20 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:24:51.013 05:07:20 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:51.013 
05:07:20 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:51.013 05:07:20 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:51.272 [2024-04-27 05:07:21.101816] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:51.272 05:07:21 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:51.272 05:07:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:51.272 05:07:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:51.272 05:07:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:51.272 05:07:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:51.272 05:07:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:51.272 05:07:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:51.272 05:07:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:51.272 05:07:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:51.272 05:07:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:51.272 05:07:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.272 05:07:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.531 05:07:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:51.531 "name": "raid_bdev1", 00:24:51.531 "uuid": "7971b4e1-7709-46ff-adaa-c79f43adb011", 00:24:51.531 "strip_size_kb": 64, 00:24:51.531 "state": "online", 00:24:51.531 "raid_level": "raid5f", 00:24:51.531 "superblock": true, 00:24:51.531 "num_base_bdevs": 3, 00:24:51.531 "num_base_bdevs_discovered": 2, 00:24:51.531 "num_base_bdevs_operational": 2, 00:24:51.531 "base_bdevs_list": [ 00:24:51.531 { 00:24:51.531 "name": null, 00:24:51.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.531 "is_configured": false, 00:24:51.531 "data_offset": 2048, 00:24:51.531 "data_size": 63488 00:24:51.531 }, 00:24:51.531 { 00:24:51.531 "name": "pt2", 00:24:51.531 "uuid": "ed08899b-63ec-57e9-926a-1a3c11976938", 00:24:51.531 "is_configured": true, 00:24:51.531 "data_offset": 2048, 00:24:51.531 "data_size": 63488 00:24:51.531 }, 00:24:51.531 { 00:24:51.531 "name": "pt3", 00:24:51.531 "uuid": "d163315a-0bec-5136-a0f1-281926db907f", 00:24:51.531 "is_configured": true, 00:24:51.531 "data_offset": 2048, 00:24:51.531 "data_size": 63488 00:24:51.531 } 00:24:51.531 ] 00:24:51.531 }' 00:24:51.531 05:07:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:51.531 05:07:21 -- common/autotest_common.sh@10 -- # set +x 00:24:52.159 05:07:21 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:52.417 [2024-04-27 05:07:22.254059] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:52.417 [2024-04-27 05:07:22.254123] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:52.417 [2024-04-27 05:07:22.254230] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:52.417 [2024-04-27 05:07:22.254322] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:52.417 [2024-04-27 05:07:22.254338] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:24:52.417 05:07:22 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.417 05:07:22 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:24:52.675 05:07:22 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:24:52.675 05:07:22 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:24:52.675 05:07:22 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:24:52.675 05:07:22 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:52.675 05:07:22 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:52.933 05:07:22 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:24:52.933 05:07:22 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:52.933 05:07:22 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:53.191 05:07:22 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:24:53.191 05:07:22 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:53.191 05:07:22 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:24:53.191 05:07:22 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:53.191 05:07:22 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:53.449 [2024-04-27 05:07:23.218280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:53.449 [2024-04-27 05:07:23.218413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:53.449 [2024-04-27 05:07:23.218467] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:24:53.449 [2024-04-27 05:07:23.218502] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:53.449 [2024-04-27 05:07:23.221423] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:53.449 [2024-04-27 05:07:23.221487] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:53.449 [2024-04-27 05:07:23.221629] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:53.449 [2024-04-27 05:07:23.221687] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:53.449 pt2 00:24:53.449 05:07:23 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:24:53.449 05:07:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:53.449 05:07:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:53.449 05:07:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:53.449 05:07:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:53.449 05:07:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:53.449 05:07:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:53.449 05:07:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:53.449 05:07:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:53.449 05:07:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:53.449 05:07:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.449 05:07:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.706 05:07:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:53.706 "name": "raid_bdev1", 00:24:53.707 "uuid": "7971b4e1-7709-46ff-adaa-c79f43adb011", 00:24:53.707 "strip_size_kb": 64, 
00:24:53.707 "state": "configuring", 00:24:53.707 "raid_level": "raid5f", 00:24:53.707 "superblock": true, 00:24:53.707 "num_base_bdevs": 3, 00:24:53.707 "num_base_bdevs_discovered": 1, 00:24:53.707 "num_base_bdevs_operational": 2, 00:24:53.707 "base_bdevs_list": [ 00:24:53.707 { 00:24:53.707 "name": null, 00:24:53.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.707 "is_configured": false, 00:24:53.707 "data_offset": 2048, 00:24:53.707 "data_size": 63488 00:24:53.707 }, 00:24:53.707 { 00:24:53.707 "name": "pt2", 00:24:53.707 "uuid": "ed08899b-63ec-57e9-926a-1a3c11976938", 00:24:53.707 "is_configured": true, 00:24:53.707 "data_offset": 2048, 00:24:53.707 "data_size": 63488 00:24:53.707 }, 00:24:53.707 { 00:24:53.707 "name": null, 00:24:53.707 "uuid": "d163315a-0bec-5136-a0f1-281926db907f", 00:24:53.707 "is_configured": false, 00:24:53.707 "data_offset": 2048, 00:24:53.707 "data_size": 63488 00:24:53.707 } 00:24:53.707 ] 00:24:53.707 }' 00:24:53.707 05:07:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:53.707 05:07:23 -- common/autotest_common.sh@10 -- # set +x 00:24:54.272 05:07:24 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:24:54.272 05:07:24 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:54.272 05:07:24 -- bdev/bdev_raid.sh@462 -- # i=2 00:24:54.272 05:07:24 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:54.530 [2024-04-27 05:07:24.414588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:54.530 [2024-04-27 05:07:24.414725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:54.530 [2024-04-27 05:07:24.414794] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:24:54.530 [2024-04-27 05:07:24.414830] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.530 [2024-04-27 05:07:24.415668] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.530 [2024-04-27 05:07:24.415720] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:54.530 [2024-04-27 05:07:24.415856] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:54.530 [2024-04-27 05:07:24.415902] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:54.530 [2024-04-27 05:07:24.416055] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:24:54.530 [2024-04-27 05:07:24.416069] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:54.530 [2024-04-27 05:07:24.416145] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:54.530 [2024-04-27 05:07:24.417013] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:24:54.530 [2024-04-27 05:07:24.417042] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:24:54.530 [2024-04-27 05:07:24.417324] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:54.530 pt3 00:24:54.530 05:07:24 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:54.530 05:07:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:54.530 05:07:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:54.530 
05:07:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:54.530 05:07:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:54.530 05:07:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:54.530 05:07:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:54.530 05:07:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:54.530 05:07:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:54.530 05:07:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:54.530 05:07:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.530 05:07:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.789 05:07:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:54.789 "name": "raid_bdev1", 00:24:54.789 "uuid": "7971b4e1-7709-46ff-adaa-c79f43adb011", 00:24:54.789 "strip_size_kb": 64, 00:24:54.789 "state": "online", 00:24:54.789 "raid_level": "raid5f", 00:24:54.789 "superblock": true, 00:24:54.789 "num_base_bdevs": 3, 00:24:54.789 "num_base_bdevs_discovered": 2, 00:24:54.789 "num_base_bdevs_operational": 2, 00:24:54.789 "base_bdevs_list": [ 00:24:54.789 { 00:24:54.789 "name": null, 00:24:54.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.789 "is_configured": false, 00:24:54.789 "data_offset": 2048, 00:24:54.789 "data_size": 63488 00:24:54.789 }, 00:24:54.789 { 00:24:54.789 "name": "pt2", 00:24:54.789 "uuid": "ed08899b-63ec-57e9-926a-1a3c11976938", 00:24:54.789 "is_configured": true, 00:24:54.789 "data_offset": 2048, 00:24:54.789 "data_size": 63488 00:24:54.789 }, 00:24:54.789 { 00:24:54.789 "name": "pt3", 00:24:54.789 "uuid": "d163315a-0bec-5136-a0f1-281926db907f", 00:24:54.789 "is_configured": true, 00:24:54.789 "data_offset": 2048, 00:24:54.789 "data_size": 63488 00:24:54.789 } 00:24:54.789 ] 00:24:54.789 }' 00:24:54.789 05:07:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:54.789 05:07:24 -- common/autotest_common.sh@10 -- # set +x 00:24:55.723 05:07:25 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:24:55.724 05:07:25 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:55.724 [2024-04-27 05:07:25.543606] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:55.724 [2024-04-27 05:07:25.543674] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:55.724 [2024-04-27 05:07:25.543776] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:55.724 [2024-04-27 05:07:25.543860] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:55.724 [2024-04-27 05:07:25.543874] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:24:55.724 05:07:25 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.724 05:07:25 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:24:55.982 05:07:25 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:24:55.982 05:07:25 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:24:55.982 05:07:25 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:56.240 [2024-04-27 05:07:26.067769] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:56.240 [2024-04-27 05:07:26.067898] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:56.240 [2024-04-27 05:07:26.067954] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:56.240 [2024-04-27 05:07:26.067994] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:56.240 [2024-04-27 05:07:26.070905] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:56.240 [2024-04-27 05:07:26.070968] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:56.240 [2024-04-27 05:07:26.071120] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:56.240 [2024-04-27 05:07:26.071179] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:56.240 pt1 00:24:56.240 05:07:26 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:56.240 05:07:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:56.240 05:07:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:56.240 05:07:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:56.240 05:07:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:56.240 05:07:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:56.240 05:07:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:56.240 05:07:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:56.240 05:07:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:56.240 05:07:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:56.240 05:07:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.240 05:07:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.498 05:07:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:56.498 "name": "raid_bdev1", 00:24:56.498 "uuid": "7971b4e1-7709-46ff-adaa-c79f43adb011", 00:24:56.499 "strip_size_kb": 64, 00:24:56.499 "state": "configuring", 00:24:56.499 "raid_level": "raid5f", 00:24:56.499 "superblock": true, 00:24:56.499 "num_base_bdevs": 3, 00:24:56.499 "num_base_bdevs_discovered": 1, 00:24:56.499 "num_base_bdevs_operational": 3, 00:24:56.499 "base_bdevs_list": [ 00:24:56.499 { 00:24:56.499 "name": "pt1", 00:24:56.499 "uuid": "5e48c1d0-6bfb-5ef0-bb71-fa63679411f1", 00:24:56.499 "is_configured": true, 00:24:56.499 "data_offset": 2048, 00:24:56.499 "data_size": 63488 00:24:56.499 }, 00:24:56.499 { 00:24:56.499 "name": null, 00:24:56.499 "uuid": "ed08899b-63ec-57e9-926a-1a3c11976938", 00:24:56.499 "is_configured": false, 00:24:56.499 "data_offset": 2048, 00:24:56.499 "data_size": 63488 00:24:56.499 }, 00:24:56.499 { 00:24:56.499 "name": null, 00:24:56.499 "uuid": "d163315a-0bec-5136-a0f1-281926db907f", 00:24:56.499 "is_configured": false, 00:24:56.499 "data_offset": 2048, 00:24:56.499 "data_size": 63488 00:24:56.499 } 00:24:56.499 ] 00:24:56.499 }' 00:24:56.499 05:07:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:56.499 05:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:57.434 05:07:26 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:24:57.434 05:07:26 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:57.434 05:07:26 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:24:57.434 05:07:27 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:57.434 05:07:27 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:57.434 05:07:27 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:57.693 05:07:27 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:57.693 05:07:27 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:57.693 05:07:27 -- bdev/bdev_raid.sh@489 -- # i=2 00:24:57.693 05:07:27 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:57.952 [2024-04-27 05:07:27.713319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:57.952 [2024-04-27 05:07:27.713456] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:57.952 [2024-04-27 05:07:27.713510] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:24:57.952 [2024-04-27 05:07:27.713545] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:57.952 [2024-04-27 05:07:27.714116] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:57.952 [2024-04-27 05:07:27.714168] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:57.952 [2024-04-27 05:07:27.714294] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:57.952 [2024-04-27 05:07:27.714314] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:57.952 [2024-04-27 05:07:27.714323] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:57.952 [2024-04-27 05:07:27.714365] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state configuring 00:24:57.952 [2024-04-27 05:07:27.714435] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:57.952 pt3 00:24:57.952 05:07:27 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:24:57.952 05:07:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:57.952 05:07:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:57.952 05:07:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:57.952 05:07:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:57.952 05:07:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:57.952 05:07:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:57.952 05:07:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:57.952 05:07:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:57.952 05:07:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:57.952 05:07:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.952 05:07:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.210 05:07:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:58.210 "name": "raid_bdev1", 00:24:58.210 "uuid": "7971b4e1-7709-46ff-adaa-c79f43adb011", 00:24:58.210 "strip_size_kb": 64, 00:24:58.210 "state": "configuring", 00:24:58.210 "raid_level": "raid5f", 00:24:58.210 "superblock": true, 00:24:58.210 "num_base_bdevs": 3, 00:24:58.210 
"num_base_bdevs_discovered": 1, 00:24:58.210 "num_base_bdevs_operational": 2, 00:24:58.210 "base_bdevs_list": [ 00:24:58.210 { 00:24:58.210 "name": null, 00:24:58.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.210 "is_configured": false, 00:24:58.210 "data_offset": 2048, 00:24:58.210 "data_size": 63488 00:24:58.210 }, 00:24:58.210 { 00:24:58.210 "name": null, 00:24:58.210 "uuid": "ed08899b-63ec-57e9-926a-1a3c11976938", 00:24:58.210 "is_configured": false, 00:24:58.210 "data_offset": 2048, 00:24:58.210 "data_size": 63488 00:24:58.210 }, 00:24:58.210 { 00:24:58.210 "name": "pt3", 00:24:58.210 "uuid": "d163315a-0bec-5136-a0f1-281926db907f", 00:24:58.210 "is_configured": true, 00:24:58.210 "data_offset": 2048, 00:24:58.211 "data_size": 63488 00:24:58.211 } 00:24:58.211 ] 00:24:58.211 }' 00:24:58.211 05:07:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:58.211 05:07:28 -- common/autotest_common.sh@10 -- # set +x 00:24:58.786 05:07:28 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:24:58.786 05:07:28 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:24:58.786 05:07:28 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:59.070 [2024-04-27 05:07:28.913612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:59.070 [2024-04-27 05:07:28.913767] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:59.070 [2024-04-27 05:07:28.913817] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:24:59.070 [2024-04-27 05:07:28.913855] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:59.070 [2024-04-27 05:07:28.914430] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:59.070 [2024-04-27 05:07:28.914487] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:59.070 [2024-04-27 05:07:28.914596] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:59.070 [2024-04-27 05:07:28.914655] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:59.070 [2024-04-27 05:07:28.914803] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:24:59.070 [2024-04-27 05:07:28.914818] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:59.070 [2024-04-27 05:07:28.914919] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:24:59.070 [2024-04-27 05:07:28.915727] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:24:59.070 [2024-04-27 05:07:28.915755] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:24:59.070 [2024-04-27 05:07:28.915950] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:59.070 pt2 00:24:59.070 05:07:28 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:24:59.070 05:07:28 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:24:59.071 05:07:28 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:59.071 05:07:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:59.071 05:07:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:59.071 05:07:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 
00:24:59.071 05:07:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:59.071 05:07:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:59.071 05:07:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:59.071 05:07:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:59.071 05:07:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:59.071 05:07:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:59.071 05:07:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.071 05:07:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.329 05:07:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:59.329 "name": "raid_bdev1", 00:24:59.329 "uuid": "7971b4e1-7709-46ff-adaa-c79f43adb011", 00:24:59.329 "strip_size_kb": 64, 00:24:59.329 "state": "online", 00:24:59.329 "raid_level": "raid5f", 00:24:59.329 "superblock": true, 00:24:59.329 "num_base_bdevs": 3, 00:24:59.329 "num_base_bdevs_discovered": 2, 00:24:59.329 "num_base_bdevs_operational": 2, 00:24:59.329 "base_bdevs_list": [ 00:24:59.329 { 00:24:59.329 "name": null, 00:24:59.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.329 "is_configured": false, 00:24:59.329 "data_offset": 2048, 00:24:59.329 "data_size": 63488 00:24:59.329 }, 00:24:59.329 { 00:24:59.329 "name": "pt2", 00:24:59.329 "uuid": "ed08899b-63ec-57e9-926a-1a3c11976938", 00:24:59.329 "is_configured": true, 00:24:59.329 "data_offset": 2048, 00:24:59.329 "data_size": 63488 00:24:59.329 }, 00:24:59.329 { 00:24:59.329 "name": "pt3", 00:24:59.329 "uuid": "d163315a-0bec-5136-a0f1-281926db907f", 00:24:59.329 "is_configured": true, 00:24:59.329 "data_offset": 2048, 00:24:59.329 "data_size": 63488 00:24:59.329 } 00:24:59.329 ] 00:24:59.329 }' 00:24:59.329 05:07:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:59.329 05:07:29 -- common/autotest_common.sh@10 -- # set +x 00:25:00.264 05:07:29 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:00.264 05:07:29 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:25:00.264 [2024-04-27 05:07:30.070433] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:00.264 05:07:30 -- bdev/bdev_raid.sh@506 -- # '[' 7971b4e1-7709-46ff-adaa-c79f43adb011 '!=' 7971b4e1-7709-46ff-adaa-c79f43adb011 ']' 00:25:00.264 05:07:30 -- bdev/bdev_raid.sh@511 -- # killprocess 140085 00:25:00.264 05:07:30 -- common/autotest_common.sh@926 -- # '[' -z 140085 ']' 00:25:00.264 05:07:30 -- common/autotest_common.sh@930 -- # kill -0 140085 00:25:00.264 05:07:30 -- common/autotest_common.sh@931 -- # uname 00:25:00.264 05:07:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:00.264 05:07:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140085 00:25:00.264 killing process with pid 140085 00:25:00.264 05:07:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:00.264 05:07:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:00.264 05:07:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140085' 00:25:00.264 05:07:30 -- common/autotest_common.sh@945 -- # kill 140085 00:25:00.264 05:07:30 -- common/autotest_common.sh@950 -- # wait 140085 00:25:00.264 [2024-04-27 05:07:30.116137] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:00.264 [2024-04-27 05:07:30.116268] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:00.264 [2024-04-27 05:07:30.116350] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:00.264 [2024-04-27 05:07:30.116363] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:25:00.522 [2024-04-27 05:07:30.180148] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@513 -- # return 0 00:25:00.781 00:25:00.781 real 0m19.938s 00:25:00.781 user 0m37.225s 00:25:00.781 sys 0m2.570s 00:25:00.781 05:07:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:00.781 ************************************ 00:25:00.781 END TEST raid5f_superblock_test 00:25:00.781 ************************************ 00:25:00.781 05:07:30 -- common/autotest_common.sh@10 -- # set +x 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:25:00.781 05:07:30 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:25:00.781 05:07:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:00.781 05:07:30 -- common/autotest_common.sh@10 -- # set +x 00:25:00.781 ************************************ 00:25:00.781 START TEST raid5f_rebuild_test 00:25:00.781 ************************************ 00:25:00.781 05:07:30 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 false false 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:25:00.781 05:07:30 -- 
bdev/bdev_raid.sh@544 -- # raid_pid=140698 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@545 -- # waitforlisten 140698 /var/tmp/spdk-raid.sock 00:25:00.781 05:07:30 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:00.781 05:07:30 -- common/autotest_common.sh@819 -- # '[' -z 140698 ']' 00:25:00.781 05:07:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:00.781 05:07:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:00.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:00.781 05:07:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:00.781 05:07:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:00.781 05:07:30 -- common/autotest_common.sh@10 -- # set +x 00:25:00.781 [2024-04-27 05:07:30.638616] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:00.781 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:00.781 Zero copy mechanism will not be used. 00:25:00.781 [2024-04-27 05:07:30.638844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140698 ] 00:25:01.040 [2024-04-27 05:07:30.802104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.040 [2024-04-27 05:07:30.926725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.298 [2024-04-27 05:07:31.006452] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:01.871 05:07:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:01.871 05:07:31 -- common/autotest_common.sh@852 -- # return 0 00:25:01.871 05:07:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:01.871 05:07:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:01.871 05:07:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:02.129 BaseBdev1 00:25:02.129 05:07:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:02.129 05:07:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:02.129 05:07:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:02.387 BaseBdev2 00:25:02.387 05:07:32 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:02.387 05:07:32 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:02.387 05:07:32 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:02.644 BaseBdev3 00:25:02.644 05:07:32 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:02.901 spare_malloc 00:25:02.901 05:07:32 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:03.159 spare_delay 00:25:03.159 05:07:32 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:03.416 [2024-04-27 05:07:33.198927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:03.416 [2024-04-27 05:07:33.199107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:03.416 [2024-04-27 05:07:33.199180] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:25:03.416 [2024-04-27 05:07:33.199242] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:03.416 [2024-04-27 05:07:33.202461] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:03.416 [2024-04-27 05:07:33.202548] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:03.416 spare 00:25:03.416 05:07:33 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:25:03.674 [2024-04-27 05:07:33.435194] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:03.674 [2024-04-27 05:07:33.437771] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:03.674 [2024-04-27 05:07:33.437844] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:03.674 [2024-04-27 05:07:33.437966] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:25:03.674 [2024-04-27 05:07:33.437981] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:25:03.674 [2024-04-27 05:07:33.438199] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:25:03.674 [2024-04-27 05:07:33.439199] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:25:03.674 [2024-04-27 05:07:33.439227] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:25:03.674 [2024-04-27 05:07:33.439557] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:03.674 05:07:33 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:03.674 05:07:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:03.674 05:07:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:03.674 05:07:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:03.674 05:07:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:03.674 05:07:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:03.674 05:07:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:03.674 05:07:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:03.674 05:07:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:03.674 05:07:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:03.674 05:07:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.674 05:07:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.932 05:07:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:03.932 "name": "raid_bdev1", 00:25:03.932 "uuid": "6599c9af-40c2-444f-a431-1d8ba360c37a", 00:25:03.932 "strip_size_kb": 64, 00:25:03.932 "state": "online", 00:25:03.932 "raid_level": "raid5f", 00:25:03.932 "superblock": false, 00:25:03.932 "num_base_bdevs": 3, 
00:25:03.932 "num_base_bdevs_discovered": 3, 00:25:03.932 "num_base_bdevs_operational": 3, 00:25:03.932 "base_bdevs_list": [ 00:25:03.932 { 00:25:03.932 "name": "BaseBdev1", 00:25:03.932 "uuid": "79995635-474c-4776-8e2a-5b38cf46782e", 00:25:03.932 "is_configured": true, 00:25:03.932 "data_offset": 0, 00:25:03.932 "data_size": 65536 00:25:03.932 }, 00:25:03.932 { 00:25:03.932 "name": "BaseBdev2", 00:25:03.932 "uuid": "c4ea1ae0-48e6-4764-a1ee-fbf035405cb0", 00:25:03.932 "is_configured": true, 00:25:03.932 "data_offset": 0, 00:25:03.932 "data_size": 65536 00:25:03.932 }, 00:25:03.932 { 00:25:03.932 "name": "BaseBdev3", 00:25:03.932 "uuid": "a73b43b0-e7bb-429c-87b9-dac660213779", 00:25:03.932 "is_configured": true, 00:25:03.932 "data_offset": 0, 00:25:03.932 "data_size": 65536 00:25:03.932 } 00:25:03.932 ] 00:25:03.932 }' 00:25:03.932 05:07:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:03.932 05:07:33 -- common/autotest_common.sh@10 -- # set +x 00:25:04.498 05:07:34 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:04.498 05:07:34 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:04.756 [2024-04-27 05:07:34.640020] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:04.756 05:07:34 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:25:04.756 05:07:34 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.756 05:07:34 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:05.327 05:07:34 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:25:05.327 05:07:34 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:25:05.327 05:07:34 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:25:05.327 05:07:34 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:05.327 05:07:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:05.327 05:07:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:05.327 05:07:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:05.327 05:07:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:05.327 05:07:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:05.327 05:07:34 -- bdev/nbd_common.sh@12 -- # local i 00:25:05.327 05:07:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:05.327 05:07:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:05.327 05:07:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:05.327 [2024-04-27 05:07:35.204085] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:25:05.327 /dev/nbd0 00:25:05.594 05:07:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:05.594 05:07:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:05.594 05:07:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:05.594 05:07:35 -- common/autotest_common.sh@857 -- # local i 00:25:05.594 05:07:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:05.594 05:07:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:05.594 05:07:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:05.594 05:07:35 -- common/autotest_common.sh@861 -- # break 00:25:05.594 05:07:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:05.594 05:07:35 -- common/autotest_common.sh@872 -- # (( 
i <= 20 )) 00:25:05.594 05:07:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:05.594 1+0 records in 00:25:05.594 1+0 records out 00:25:05.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588144 s, 7.0 MB/s 00:25:05.594 05:07:35 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:05.594 05:07:35 -- common/autotest_common.sh@874 -- # size=4096 00:25:05.594 05:07:35 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:05.594 05:07:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:05.594 05:07:35 -- common/autotest_common.sh@877 -- # return 0 00:25:05.594 05:07:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:05.594 05:07:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:05.594 05:07:35 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:25:05.594 05:07:35 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:25:05.594 05:07:35 -- bdev/bdev_raid.sh@582 -- # echo 128 00:25:05.594 05:07:35 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:25:05.852 512+0 records in 00:25:05.852 512+0 records out 00:25:05.852 67108864 bytes (67 MB, 64 MiB) copied, 0.405956 s, 165 MB/s 00:25:05.852 05:07:35 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:05.852 05:07:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:05.852 05:07:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:05.852 05:07:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:05.852 05:07:35 -- bdev/nbd_common.sh@51 -- # local i 00:25:05.852 05:07:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:05.852 05:07:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:06.110 [2024-04-27 05:07:35.963979] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:06.110 05:07:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:06.110 05:07:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:06.110 05:07:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:06.110 05:07:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:06.110 05:07:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:06.111 05:07:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:06.111 05:07:35 -- bdev/nbd_common.sh@41 -- # break 00:25:06.111 05:07:35 -- bdev/nbd_common.sh@45 -- # return 0 00:25:06.111 05:07:35 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:06.369 [2024-04-27 05:07:36.195529] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:06.369 05:07:36 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:06.369 05:07:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:06.369 05:07:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:06.369 05:07:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:06.369 05:07:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:06.369 05:07:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:06.369 05:07:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:06.369 05:07:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:06.369 05:07:36 
-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:06.369 05:07:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:06.369 05:07:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.369 05:07:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.627 05:07:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:06.627 "name": "raid_bdev1", 00:25:06.627 "uuid": "6599c9af-40c2-444f-a431-1d8ba360c37a", 00:25:06.627 "strip_size_kb": 64, 00:25:06.627 "state": "online", 00:25:06.627 "raid_level": "raid5f", 00:25:06.627 "superblock": false, 00:25:06.627 "num_base_bdevs": 3, 00:25:06.627 "num_base_bdevs_discovered": 2, 00:25:06.627 "num_base_bdevs_operational": 2, 00:25:06.627 "base_bdevs_list": [ 00:25:06.627 { 00:25:06.627 "name": null, 00:25:06.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.627 "is_configured": false, 00:25:06.627 "data_offset": 0, 00:25:06.627 "data_size": 65536 00:25:06.627 }, 00:25:06.627 { 00:25:06.627 "name": "BaseBdev2", 00:25:06.627 "uuid": "c4ea1ae0-48e6-4764-a1ee-fbf035405cb0", 00:25:06.627 "is_configured": true, 00:25:06.627 "data_offset": 0, 00:25:06.627 "data_size": 65536 00:25:06.627 }, 00:25:06.627 { 00:25:06.627 "name": "BaseBdev3", 00:25:06.627 "uuid": "a73b43b0-e7bb-429c-87b9-dac660213779", 00:25:06.627 "is_configured": true, 00:25:06.627 "data_offset": 0, 00:25:06.627 "data_size": 65536 00:25:06.627 } 00:25:06.627 ] 00:25:06.627 }' 00:25:06.627 05:07:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:06.627 05:07:36 -- common/autotest_common.sh@10 -- # set +x 00:25:07.559 05:07:37 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:07.559 [2024-04-27 05:07:37.395816] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:07.559 [2024-04-27 05:07:37.395907] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:07.560 [2024-04-27 05:07:37.402520] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:25:07.560 [2024-04-27 05:07:37.405562] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:07.560 05:07:37 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:08.936 05:07:38 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:08.936 05:07:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:08.936 05:07:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:08.936 05:07:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:08.936 05:07:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:08.936 05:07:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.936 05:07:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.936 05:07:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:08.936 "name": "raid_bdev1", 00:25:08.936 "uuid": "6599c9af-40c2-444f-a431-1d8ba360c37a", 00:25:08.936 "strip_size_kb": 64, 00:25:08.936 "state": "online", 00:25:08.936 "raid_level": "raid5f", 00:25:08.936 "superblock": false, 00:25:08.936 "num_base_bdevs": 3, 00:25:08.936 "num_base_bdevs_discovered": 3, 00:25:08.936 "num_base_bdevs_operational": 3, 00:25:08.936 "process": { 00:25:08.936 "type": "rebuild", 00:25:08.936 "target": 
"spare", 00:25:08.936 "progress": { 00:25:08.936 "blocks": 24576, 00:25:08.936 "percent": 18 00:25:08.936 } 00:25:08.936 }, 00:25:08.936 "base_bdevs_list": [ 00:25:08.936 { 00:25:08.936 "name": "spare", 00:25:08.936 "uuid": "e462a704-02cb-521c-85c7-31ab1c9772ff", 00:25:08.936 "is_configured": true, 00:25:08.936 "data_offset": 0, 00:25:08.936 "data_size": 65536 00:25:08.936 }, 00:25:08.936 { 00:25:08.936 "name": "BaseBdev2", 00:25:08.936 "uuid": "c4ea1ae0-48e6-4764-a1ee-fbf035405cb0", 00:25:08.936 "is_configured": true, 00:25:08.936 "data_offset": 0, 00:25:08.936 "data_size": 65536 00:25:08.936 }, 00:25:08.936 { 00:25:08.936 "name": "BaseBdev3", 00:25:08.936 "uuid": "a73b43b0-e7bb-429c-87b9-dac660213779", 00:25:08.936 "is_configured": true, 00:25:08.936 "data_offset": 0, 00:25:08.936 "data_size": 65536 00:25:08.936 } 00:25:08.936 ] 00:25:08.936 }' 00:25:08.936 05:07:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:08.936 05:07:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:08.936 05:07:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:08.936 05:07:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:08.936 05:07:38 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:09.195 [2024-04-27 05:07:39.003444] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:09.195 [2024-04-27 05:07:39.026303] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:09.195 [2024-04-27 05:07:39.026474] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:09.195 05:07:39 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:09.195 05:07:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:09.195 05:07:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:09.195 05:07:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:09.195 05:07:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:09.195 05:07:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:09.195 05:07:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:09.195 05:07:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:09.195 05:07:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:09.195 05:07:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:09.195 05:07:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.195 05:07:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.454 05:07:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:09.454 "name": "raid_bdev1", 00:25:09.454 "uuid": "6599c9af-40c2-444f-a431-1d8ba360c37a", 00:25:09.454 "strip_size_kb": 64, 00:25:09.454 "state": "online", 00:25:09.454 "raid_level": "raid5f", 00:25:09.454 "superblock": false, 00:25:09.454 "num_base_bdevs": 3, 00:25:09.454 "num_base_bdevs_discovered": 2, 00:25:09.454 "num_base_bdevs_operational": 2, 00:25:09.454 "base_bdevs_list": [ 00:25:09.454 { 00:25:09.454 "name": null, 00:25:09.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.454 "is_configured": false, 00:25:09.454 "data_offset": 0, 00:25:09.454 "data_size": 65536 00:25:09.454 }, 00:25:09.454 { 00:25:09.454 "name": "BaseBdev2", 00:25:09.454 "uuid": "c4ea1ae0-48e6-4764-a1ee-fbf035405cb0", 
00:25:09.454 "is_configured": true, 00:25:09.454 "data_offset": 0, 00:25:09.454 "data_size": 65536 00:25:09.454 }, 00:25:09.454 { 00:25:09.454 "name": "BaseBdev3", 00:25:09.454 "uuid": "a73b43b0-e7bb-429c-87b9-dac660213779", 00:25:09.454 "is_configured": true, 00:25:09.454 "data_offset": 0, 00:25:09.454 "data_size": 65536 00:25:09.454 } 00:25:09.454 ] 00:25:09.454 }' 00:25:09.454 05:07:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:09.455 05:07:39 -- common/autotest_common.sh@10 -- # set +x 00:25:10.390 05:07:39 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:10.390 05:07:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:10.390 05:07:39 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:10.390 05:07:39 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:10.390 05:07:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:10.390 05:07:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.390 05:07:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.390 05:07:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:10.390 "name": "raid_bdev1", 00:25:10.390 "uuid": "6599c9af-40c2-444f-a431-1d8ba360c37a", 00:25:10.390 "strip_size_kb": 64, 00:25:10.390 "state": "online", 00:25:10.390 "raid_level": "raid5f", 00:25:10.390 "superblock": false, 00:25:10.390 "num_base_bdevs": 3, 00:25:10.390 "num_base_bdevs_discovered": 2, 00:25:10.390 "num_base_bdevs_operational": 2, 00:25:10.390 "base_bdevs_list": [ 00:25:10.390 { 00:25:10.390 "name": null, 00:25:10.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.390 "is_configured": false, 00:25:10.390 "data_offset": 0, 00:25:10.390 "data_size": 65536 00:25:10.390 }, 00:25:10.390 { 00:25:10.390 "name": "BaseBdev2", 00:25:10.390 "uuid": "c4ea1ae0-48e6-4764-a1ee-fbf035405cb0", 00:25:10.390 "is_configured": true, 00:25:10.390 "data_offset": 0, 00:25:10.390 "data_size": 65536 00:25:10.390 }, 00:25:10.390 { 00:25:10.390 "name": "BaseBdev3", 00:25:10.390 "uuid": "a73b43b0-e7bb-429c-87b9-dac660213779", 00:25:10.390 "is_configured": true, 00:25:10.390 "data_offset": 0, 00:25:10.390 "data_size": 65536 00:25:10.390 } 00:25:10.390 ] 00:25:10.390 }' 00:25:10.390 05:07:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:10.390 05:07:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:10.390 05:07:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:10.649 05:07:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:10.649 05:07:40 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:10.908 [2024-04-27 05:07:40.592356] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:10.908 [2024-04-27 05:07:40.592439] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:10.908 [2024-04-27 05:07:40.599210] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:25:10.908 [2024-04-27 05:07:40.602099] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:10.908 05:07:40 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:11.843 05:07:41 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:11.843 05:07:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
00:25:11.843 05:07:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:11.843 05:07:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:11.843 05:07:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:11.843 05:07:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.843 05:07:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.102 05:07:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:12.102 "name": "raid_bdev1", 00:25:12.102 "uuid": "6599c9af-40c2-444f-a431-1d8ba360c37a", 00:25:12.102 "strip_size_kb": 64, 00:25:12.102 "state": "online", 00:25:12.102 "raid_level": "raid5f", 00:25:12.102 "superblock": false, 00:25:12.102 "num_base_bdevs": 3, 00:25:12.102 "num_base_bdevs_discovered": 3, 00:25:12.102 "num_base_bdevs_operational": 3, 00:25:12.102 "process": { 00:25:12.102 "type": "rebuild", 00:25:12.102 "target": "spare", 00:25:12.102 "progress": { 00:25:12.102 "blocks": 24576, 00:25:12.102 "percent": 18 00:25:12.102 } 00:25:12.102 }, 00:25:12.102 "base_bdevs_list": [ 00:25:12.102 { 00:25:12.102 "name": "spare", 00:25:12.102 "uuid": "e462a704-02cb-521c-85c7-31ab1c9772ff", 00:25:12.102 "is_configured": true, 00:25:12.102 "data_offset": 0, 00:25:12.102 "data_size": 65536 00:25:12.102 }, 00:25:12.102 { 00:25:12.102 "name": "BaseBdev2", 00:25:12.102 "uuid": "c4ea1ae0-48e6-4764-a1ee-fbf035405cb0", 00:25:12.102 "is_configured": true, 00:25:12.102 "data_offset": 0, 00:25:12.102 "data_size": 65536 00:25:12.102 }, 00:25:12.102 { 00:25:12.102 "name": "BaseBdev3", 00:25:12.102 "uuid": "a73b43b0-e7bb-429c-87b9-dac660213779", 00:25:12.102 "is_configured": true, 00:25:12.102 "data_offset": 0, 00:25:12.102 "data_size": 65536 00:25:12.102 } 00:25:12.102 ] 00:25:12.102 }' 00:25:12.102 05:07:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:12.102 05:07:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:12.102 05:07:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:12.102 05:07:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:12.102 05:07:41 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:25:12.102 05:07:41 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:25:12.102 05:07:41 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:25:12.102 05:07:41 -- bdev/bdev_raid.sh@657 -- # local timeout=627 00:25:12.102 05:07:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:12.102 05:07:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:12.102 05:07:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:12.102 05:07:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:12.102 05:07:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:12.102 05:07:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:12.102 05:07:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.102 05:07:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.361 05:07:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:12.362 "name": "raid_bdev1", 00:25:12.362 "uuid": "6599c9af-40c2-444f-a431-1d8ba360c37a", 00:25:12.362 "strip_size_kb": 64, 00:25:12.362 "state": "online", 00:25:12.362 "raid_level": "raid5f", 00:25:12.362 "superblock": false, 00:25:12.362 "num_base_bdevs": 3, 00:25:12.362 
"num_base_bdevs_discovered": 3, 00:25:12.362 "num_base_bdevs_operational": 3, 00:25:12.362 "process": { 00:25:12.362 "type": "rebuild", 00:25:12.362 "target": "spare", 00:25:12.362 "progress": { 00:25:12.362 "blocks": 30720, 00:25:12.362 "percent": 23 00:25:12.362 } 00:25:12.362 }, 00:25:12.362 "base_bdevs_list": [ 00:25:12.362 { 00:25:12.362 "name": "spare", 00:25:12.362 "uuid": "e462a704-02cb-521c-85c7-31ab1c9772ff", 00:25:12.362 "is_configured": true, 00:25:12.362 "data_offset": 0, 00:25:12.362 "data_size": 65536 00:25:12.362 }, 00:25:12.362 { 00:25:12.362 "name": "BaseBdev2", 00:25:12.362 "uuid": "c4ea1ae0-48e6-4764-a1ee-fbf035405cb0", 00:25:12.362 "is_configured": true, 00:25:12.362 "data_offset": 0, 00:25:12.362 "data_size": 65536 00:25:12.362 }, 00:25:12.362 { 00:25:12.362 "name": "BaseBdev3", 00:25:12.362 "uuid": "a73b43b0-e7bb-429c-87b9-dac660213779", 00:25:12.362 "is_configured": true, 00:25:12.362 "data_offset": 0, 00:25:12.362 "data_size": 65536 00:25:12.362 } 00:25:12.362 ] 00:25:12.362 }' 00:25:12.362 05:07:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:12.362 05:07:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:12.362 05:07:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:12.620 05:07:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:12.620 05:07:42 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:13.567 05:07:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:13.567 05:07:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:13.567 05:07:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:13.567 05:07:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:13.567 05:07:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:13.567 05:07:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:13.567 05:07:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.567 05:07:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.825 05:07:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:13.825 "name": "raid_bdev1", 00:25:13.825 "uuid": "6599c9af-40c2-444f-a431-1d8ba360c37a", 00:25:13.825 "strip_size_kb": 64, 00:25:13.825 "state": "online", 00:25:13.825 "raid_level": "raid5f", 00:25:13.825 "superblock": false, 00:25:13.825 "num_base_bdevs": 3, 00:25:13.825 "num_base_bdevs_discovered": 3, 00:25:13.825 "num_base_bdevs_operational": 3, 00:25:13.825 "process": { 00:25:13.825 "type": "rebuild", 00:25:13.825 "target": "spare", 00:25:13.825 "progress": { 00:25:13.825 "blocks": 59392, 00:25:13.825 "percent": 45 00:25:13.825 } 00:25:13.825 }, 00:25:13.825 "base_bdevs_list": [ 00:25:13.825 { 00:25:13.825 "name": "spare", 00:25:13.825 "uuid": "e462a704-02cb-521c-85c7-31ab1c9772ff", 00:25:13.825 "is_configured": true, 00:25:13.825 "data_offset": 0, 00:25:13.825 "data_size": 65536 00:25:13.825 }, 00:25:13.825 { 00:25:13.825 "name": "BaseBdev2", 00:25:13.825 "uuid": "c4ea1ae0-48e6-4764-a1ee-fbf035405cb0", 00:25:13.825 "is_configured": true, 00:25:13.825 "data_offset": 0, 00:25:13.825 "data_size": 65536 00:25:13.825 }, 00:25:13.825 { 00:25:13.825 "name": "BaseBdev3", 00:25:13.825 "uuid": "a73b43b0-e7bb-429c-87b9-dac660213779", 00:25:13.825 "is_configured": true, 00:25:13.825 "data_offset": 0, 00:25:13.825 "data_size": 65536 00:25:13.825 } 00:25:13.825 ] 00:25:13.825 }' 00:25:13.825 05:07:43 -- bdev/bdev_raid.sh@190 
-- # jq -r '.process.type // "none"' 00:25:13.825 05:07:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:13.825 05:07:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:13.825 05:07:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:13.825 05:07:43 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:14.759 05:07:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:14.759 05:07:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:14.759 05:07:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:14.759 05:07:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:14.759 05:07:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:14.759 05:07:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:14.759 05:07:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.759 05:07:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.018 05:07:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:15.018 "name": "raid_bdev1", 00:25:15.018 "uuid": "6599c9af-40c2-444f-a431-1d8ba360c37a", 00:25:15.018 "strip_size_kb": 64, 00:25:15.018 "state": "online", 00:25:15.018 "raid_level": "raid5f", 00:25:15.018 "superblock": false, 00:25:15.018 "num_base_bdevs": 3, 00:25:15.018 "num_base_bdevs_discovered": 3, 00:25:15.018 "num_base_bdevs_operational": 3, 00:25:15.018 "process": { 00:25:15.018 "type": "rebuild", 00:25:15.018 "target": "spare", 00:25:15.018 "progress": { 00:25:15.018 "blocks": 86016, 00:25:15.018 "percent": 65 00:25:15.018 } 00:25:15.018 }, 00:25:15.019 "base_bdevs_list": [ 00:25:15.019 { 00:25:15.019 "name": "spare", 00:25:15.019 "uuid": "e462a704-02cb-521c-85c7-31ab1c9772ff", 00:25:15.019 "is_configured": true, 00:25:15.019 "data_offset": 0, 00:25:15.019 "data_size": 65536 00:25:15.019 }, 00:25:15.019 { 00:25:15.019 "name": "BaseBdev2", 00:25:15.019 "uuid": "c4ea1ae0-48e6-4764-a1ee-fbf035405cb0", 00:25:15.019 "is_configured": true, 00:25:15.019 "data_offset": 0, 00:25:15.019 "data_size": 65536 00:25:15.019 }, 00:25:15.019 { 00:25:15.019 "name": "BaseBdev3", 00:25:15.019 "uuid": "a73b43b0-e7bb-429c-87b9-dac660213779", 00:25:15.019 "is_configured": true, 00:25:15.019 "data_offset": 0, 00:25:15.019 "data_size": 65536 00:25:15.019 } 00:25:15.019 ] 00:25:15.019 }' 00:25:15.019 05:07:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:15.277 05:07:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:15.277 05:07:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:15.277 05:07:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:15.277 05:07:44 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:16.213 05:07:45 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:16.213 05:07:45 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:16.213 05:07:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:16.213 05:07:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:16.213 05:07:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:16.213 05:07:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:16.213 05:07:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.213 05:07:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:25:16.472 05:07:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:16.472 "name": "raid_bdev1", 00:25:16.472 "uuid": "6599c9af-40c2-444f-a431-1d8ba360c37a", 00:25:16.472 "strip_size_kb": 64, 00:25:16.472 "state": "online", 00:25:16.472 "raid_level": "raid5f", 00:25:16.472 "superblock": false, 00:25:16.472 "num_base_bdevs": 3, 00:25:16.472 "num_base_bdevs_discovered": 3, 00:25:16.472 "num_base_bdevs_operational": 3, 00:25:16.472 "process": { 00:25:16.472 "type": "rebuild", 00:25:16.472 "target": "spare", 00:25:16.472 "progress": { 00:25:16.472 "blocks": 112640, 00:25:16.472 "percent": 85 00:25:16.472 } 00:25:16.472 }, 00:25:16.472 "base_bdevs_list": [ 00:25:16.472 { 00:25:16.472 "name": "spare", 00:25:16.472 "uuid": "e462a704-02cb-521c-85c7-31ab1c9772ff", 00:25:16.472 "is_configured": true, 00:25:16.472 "data_offset": 0, 00:25:16.472 "data_size": 65536 00:25:16.472 }, 00:25:16.472 { 00:25:16.472 "name": "BaseBdev2", 00:25:16.472 "uuid": "c4ea1ae0-48e6-4764-a1ee-fbf035405cb0", 00:25:16.472 "is_configured": true, 00:25:16.472 "data_offset": 0, 00:25:16.472 "data_size": 65536 00:25:16.472 }, 00:25:16.472 { 00:25:16.472 "name": "BaseBdev3", 00:25:16.472 "uuid": "a73b43b0-e7bb-429c-87b9-dac660213779", 00:25:16.472 "is_configured": true, 00:25:16.472 "data_offset": 0, 00:25:16.472 "data_size": 65536 00:25:16.472 } 00:25:16.472 ] 00:25:16.472 }' 00:25:16.472 05:07:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:16.472 05:07:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:16.472 05:07:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:16.472 05:07:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:16.472 05:07:46 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:17.408 [2024-04-27 05:07:47.081171] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:17.408 [2024-04-27 05:07:47.081300] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:17.408 [2024-04-27 05:07:47.081416] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:17.667 05:07:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:17.667 05:07:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:17.667 05:07:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:17.667 05:07:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:17.667 05:07:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:17.667 05:07:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:17.667 05:07:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.667 05:07:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.925 05:07:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:17.925 "name": "raid_bdev1", 00:25:17.925 "uuid": "6599c9af-40c2-444f-a431-1d8ba360c37a", 00:25:17.925 "strip_size_kb": 64, 00:25:17.925 "state": "online", 00:25:17.925 "raid_level": "raid5f", 00:25:17.925 "superblock": false, 00:25:17.925 "num_base_bdevs": 3, 00:25:17.926 "num_base_bdevs_discovered": 3, 00:25:17.926 "num_base_bdevs_operational": 3, 00:25:17.926 "base_bdevs_list": [ 00:25:17.926 { 00:25:17.926 "name": "spare", 00:25:17.926 "uuid": "e462a704-02cb-521c-85c7-31ab1c9772ff", 00:25:17.926 "is_configured": true, 00:25:17.926 "data_offset": 0, 00:25:17.926 "data_size": 65536 
00:25:17.926 }, 00:25:17.926 { 00:25:17.926 "name": "BaseBdev2", 00:25:17.926 "uuid": "c4ea1ae0-48e6-4764-a1ee-fbf035405cb0", 00:25:17.926 "is_configured": true, 00:25:17.926 "data_offset": 0, 00:25:17.926 "data_size": 65536 00:25:17.926 }, 00:25:17.926 { 00:25:17.926 "name": "BaseBdev3", 00:25:17.926 "uuid": "a73b43b0-e7bb-429c-87b9-dac660213779", 00:25:17.926 "is_configured": true, 00:25:17.926 "data_offset": 0, 00:25:17.926 "data_size": 65536 00:25:17.926 } 00:25:17.926 ] 00:25:17.926 }' 00:25:17.926 05:07:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:17.926 05:07:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:17.926 05:07:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:17.926 05:07:47 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:17.926 05:07:47 -- bdev/bdev_raid.sh@660 -- # break 00:25:17.926 05:07:47 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:17.926 05:07:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:17.926 05:07:47 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:17.926 05:07:47 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:17.926 05:07:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:17.926 05:07:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.926 05:07:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.185 05:07:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:18.185 "name": "raid_bdev1", 00:25:18.185 "uuid": "6599c9af-40c2-444f-a431-1d8ba360c37a", 00:25:18.185 "strip_size_kb": 64, 00:25:18.185 "state": "online", 00:25:18.185 "raid_level": "raid5f", 00:25:18.185 "superblock": false, 00:25:18.185 "num_base_bdevs": 3, 00:25:18.185 "num_base_bdevs_discovered": 3, 00:25:18.185 "num_base_bdevs_operational": 3, 00:25:18.185 "base_bdevs_list": [ 00:25:18.185 { 00:25:18.185 "name": "spare", 00:25:18.185 "uuid": "e462a704-02cb-521c-85c7-31ab1c9772ff", 00:25:18.185 "is_configured": true, 00:25:18.185 "data_offset": 0, 00:25:18.185 "data_size": 65536 00:25:18.185 }, 00:25:18.185 { 00:25:18.185 "name": "BaseBdev2", 00:25:18.185 "uuid": "c4ea1ae0-48e6-4764-a1ee-fbf035405cb0", 00:25:18.185 "is_configured": true, 00:25:18.185 "data_offset": 0, 00:25:18.185 "data_size": 65536 00:25:18.185 }, 00:25:18.185 { 00:25:18.185 "name": "BaseBdev3", 00:25:18.185 "uuid": "a73b43b0-e7bb-429c-87b9-dac660213779", 00:25:18.185 "is_configured": true, 00:25:18.185 "data_offset": 0, 00:25:18.185 "data_size": 65536 00:25:18.185 } 00:25:18.185 ] 00:25:18.185 }' 00:25:18.185 05:07:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:18.185 05:07:48 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:18.185 05:07:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:18.443 05:07:48 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:18.443 05:07:48 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:18.443 05:07:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:18.443 05:07:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:18.443 05:07:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:18.443 05:07:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:18.443 05:07:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:18.443 05:07:48 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:18.443 05:07:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:18.443 05:07:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:18.443 05:07:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:18.443 05:07:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.443 05:07:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.702 05:07:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:18.702 "name": "raid_bdev1", 00:25:18.702 "uuid": "6599c9af-40c2-444f-a431-1d8ba360c37a", 00:25:18.702 "strip_size_kb": 64, 00:25:18.702 "state": "online", 00:25:18.702 "raid_level": "raid5f", 00:25:18.702 "superblock": false, 00:25:18.702 "num_base_bdevs": 3, 00:25:18.702 "num_base_bdevs_discovered": 3, 00:25:18.702 "num_base_bdevs_operational": 3, 00:25:18.702 "base_bdevs_list": [ 00:25:18.702 { 00:25:18.702 "name": "spare", 00:25:18.702 "uuid": "e462a704-02cb-521c-85c7-31ab1c9772ff", 00:25:18.702 "is_configured": true, 00:25:18.702 "data_offset": 0, 00:25:18.702 "data_size": 65536 00:25:18.702 }, 00:25:18.702 { 00:25:18.702 "name": "BaseBdev2", 00:25:18.702 "uuid": "c4ea1ae0-48e6-4764-a1ee-fbf035405cb0", 00:25:18.702 "is_configured": true, 00:25:18.702 "data_offset": 0, 00:25:18.702 "data_size": 65536 00:25:18.702 }, 00:25:18.702 { 00:25:18.702 "name": "BaseBdev3", 00:25:18.702 "uuid": "a73b43b0-e7bb-429c-87b9-dac660213779", 00:25:18.702 "is_configured": true, 00:25:18.702 "data_offset": 0, 00:25:18.702 "data_size": 65536 00:25:18.702 } 00:25:18.702 ] 00:25:18.702 }' 00:25:18.702 05:07:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:18.702 05:07:48 -- common/autotest_common.sh@10 -- # set +x 00:25:19.269 05:07:49 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:19.527 [2024-04-27 05:07:49.271230] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:19.527 [2024-04-27 05:07:49.271294] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:19.527 [2024-04-27 05:07:49.271456] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:19.527 [2024-04-27 05:07:49.271565] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:19.527 [2024-04-27 05:07:49.271580] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:25:19.527 05:07:49 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.527 05:07:49 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:19.787 05:07:49 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:19.787 05:07:49 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:19.787 05:07:49 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:19.787 05:07:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:19.787 05:07:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:19.787 05:07:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:19.787 05:07:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:19.787 05:07:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:19.787 05:07:49 -- bdev/nbd_common.sh@12 
-- # local i 00:25:19.787 05:07:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:19.787 05:07:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:19.787 05:07:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:20.059 /dev/nbd0 00:25:20.059 05:07:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:20.059 05:07:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:20.059 05:07:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:20.059 05:07:49 -- common/autotest_common.sh@857 -- # local i 00:25:20.059 05:07:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:20.059 05:07:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:20.059 05:07:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:20.059 05:07:49 -- common/autotest_common.sh@861 -- # break 00:25:20.059 05:07:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:20.059 05:07:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:20.059 05:07:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:20.059 1+0 records in 00:25:20.059 1+0 records out 00:25:20.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039499 s, 10.4 MB/s 00:25:20.059 05:07:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:20.059 05:07:49 -- common/autotest_common.sh@874 -- # size=4096 00:25:20.059 05:07:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:20.059 05:07:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:20.059 05:07:49 -- common/autotest_common.sh@877 -- # return 0 00:25:20.059 05:07:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:20.059 05:07:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:20.059 05:07:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:20.329 /dev/nbd1 00:25:20.329 05:07:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:20.329 05:07:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:20.329 05:07:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:25:20.329 05:07:50 -- common/autotest_common.sh@857 -- # local i 00:25:20.329 05:07:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:20.329 05:07:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:20.329 05:07:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:25:20.329 05:07:50 -- common/autotest_common.sh@861 -- # break 00:25:20.329 05:07:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:20.329 05:07:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:20.329 05:07:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:20.329 1+0 records in 00:25:20.329 1+0 records out 00:25:20.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052227 s, 7.8 MB/s 00:25:20.329 05:07:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:20.329 05:07:50 -- common/autotest_common.sh@874 -- # size=4096 00:25:20.329 05:07:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:20.329 05:07:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:20.329 05:07:50 -- 
common/autotest_common.sh@877 -- # return 0 00:25:20.329 05:07:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:20.329 05:07:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:20.329 05:07:50 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:20.587 05:07:50 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:20.587 05:07:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:20.587 05:07:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:20.587 05:07:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:20.587 05:07:50 -- bdev/nbd_common.sh@51 -- # local i 00:25:20.587 05:07:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:20.587 05:07:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:20.846 05:07:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:20.846 05:07:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:20.846 05:07:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:20.846 05:07:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:20.846 05:07:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:20.846 05:07:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:20.846 05:07:50 -- bdev/nbd_common.sh@41 -- # break 00:25:20.846 05:07:50 -- bdev/nbd_common.sh@45 -- # return 0 00:25:20.846 05:07:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:20.846 05:07:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:21.104 05:07:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:21.104 05:07:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:21.104 05:07:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:21.104 05:07:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:21.104 05:07:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:21.104 05:07:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:21.104 05:07:50 -- bdev/nbd_common.sh@41 -- # break 00:25:21.104 05:07:50 -- bdev/nbd_common.sh@45 -- # return 0 00:25:21.104 05:07:50 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:25:21.104 05:07:50 -- bdev/bdev_raid.sh@709 -- # killprocess 140698 00:25:21.104 05:07:50 -- common/autotest_common.sh@926 -- # '[' -z 140698 ']' 00:25:21.104 05:07:50 -- common/autotest_common.sh@930 -- # kill -0 140698 00:25:21.104 05:07:50 -- common/autotest_common.sh@931 -- # uname 00:25:21.104 05:07:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:21.104 05:07:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140698 00:25:21.104 05:07:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:21.104 05:07:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:21.104 05:07:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140698' 00:25:21.104 killing process with pid 140698 00:25:21.104 05:07:50 -- common/autotest_common.sh@945 -- # kill 140698 00:25:21.104 Received shutdown signal, test time was about 60.000000 seconds 00:25:21.104 00:25:21.104 Latency(us) 00:25:21.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.104 =================================================================================================================== 00:25:21.104 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 
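The data check traced above validates the rebuild by exporting both the removed base bdev and the rebuilt spare through the kernel NBD driver and byte-comparing the two block devices; with data_offset 0, cmp starts at byte 0 of each. A condensed sketch of that flow, assuming the nbd kernel module is loaded and /dev/nbd0 and /dev/nbd1 are free:

  # Sketch only: compare the removed base bdev against the rebuilt spare.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC nbd_start_disk BaseBdev1 /dev/nbd0
  $RPC nbd_start_disk spare /dev/nbd1
  cmp -i 0 /dev/nbd0 /dev/nbd1      # non-zero exit means the rebuilt data differs
  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1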
00:25:21.104 [2024-04-27 05:07:50.817731] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:21.104 05:07:50 -- common/autotest_common.sh@950 -- # wait 140698 00:25:21.104 [2024-04-27 05:07:50.889664] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:21.362 05:07:51 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:21.362 00:25:21.362 real 0m20.690s 00:25:21.362 user 0m31.884s 00:25:21.362 sys 0m2.689s 00:25:21.362 05:07:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:21.362 05:07:51 -- common/autotest_common.sh@10 -- # set +x 00:25:21.362 ************************************ 00:25:21.362 END TEST raid5f_rebuild_test 00:25:21.362 ************************************ 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:25:21.620 05:07:51 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:25:21.620 05:07:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:21.620 05:07:51 -- common/autotest_common.sh@10 -- # set +x 00:25:21.620 ************************************ 00:25:21.620 START TEST raid5f_rebuild_test_sb 00:25:21.620 ************************************ 00:25:21.620 05:07:51 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 true false 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@544 -- # raid_pid=141237 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@545 -- # waitforlisten 141237 /var/tmp/spdk-raid.sock 00:25:21.620 05:07:51 -- bdev/bdev_raid.sh@543 
-- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:21.620 05:07:51 -- common/autotest_common.sh@819 -- # '[' -z 141237 ']' 00:25:21.620 05:07:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:21.620 05:07:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:21.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:21.621 05:07:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:21.621 05:07:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:21.621 05:07:51 -- common/autotest_common.sh@10 -- # set +x 00:25:21.621 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:21.621 Zero copy mechanism will not be used. 00:25:21.621 [2024-04-27 05:07:51.382999] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:21.621 [2024-04-27 05:07:51.383217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141237 ] 00:25:21.879 [2024-04-27 05:07:51.543489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.879 [2024-04-27 05:07:51.666803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.879 [2024-04-27 05:07:51.749701] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:22.445 05:07:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:22.445 05:07:52 -- common/autotest_common.sh@852 -- # return 0 00:25:22.445 05:07:52 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:22.445 05:07:52 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:22.445 05:07:52 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:22.704 BaseBdev1_malloc 00:25:22.962 05:07:52 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:23.219 [2024-04-27 05:07:52.901625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:23.219 [2024-04-27 05:07:52.901765] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:23.219 [2024-04-27 05:07:52.901825] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:25:23.219 [2024-04-27 05:07:52.901894] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:23.219 [2024-04-27 05:07:52.904967] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:23.219 [2024-04-27 05:07:52.905031] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:23.219 BaseBdev1 00:25:23.219 05:07:52 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:23.219 05:07:52 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:23.219 05:07:52 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:23.476 BaseBdev2_malloc 00:25:23.476 05:07:53 -- bdev/bdev_raid.sh@551 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:23.476 [2024-04-27 05:07:53.388910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:23.476 [2024-04-27 05:07:53.389035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:23.476 [2024-04-27 05:07:53.389094] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:23.476 [2024-04-27 05:07:53.389158] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:23.734 [2024-04-27 05:07:53.392018] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:23.734 [2024-04-27 05:07:53.392078] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:23.734 BaseBdev2 00:25:23.734 05:07:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:23.734 05:07:53 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:23.734 05:07:53 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:23.992 BaseBdev3_malloc 00:25:23.992 05:07:53 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:24.250 [2024-04-27 05:07:54.033476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:24.250 [2024-04-27 05:07:54.033604] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:24.250 [2024-04-27 05:07:54.033661] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:25:24.250 [2024-04-27 05:07:54.033717] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:24.250 [2024-04-27 05:07:54.036651] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:24.250 [2024-04-27 05:07:54.036716] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:24.250 BaseBdev3 00:25:24.250 05:07:54 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:24.508 spare_malloc 00:25:24.508 05:07:54 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:24.766 spare_delay 00:25:24.766 05:07:54 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:25.024 [2024-04-27 05:07:54.793399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:25.024 [2024-04-27 05:07:54.793547] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:25.024 [2024-04-27 05:07:54.793604] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:25.024 [2024-04-27 05:07:54.793659] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:25.025 [2024-04-27 05:07:54.796659] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:25.025 [2024-04-27 05:07:54.796726] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:25.025 spare 00:25:25.025 05:07:54 -- 
bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:25:25.283 [2024-04-27 05:07:55.033774] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:25.283 [2024-04-27 05:07:55.036341] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:25.283 [2024-04-27 05:07:55.036431] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:25.283 [2024-04-27 05:07:55.036724] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:25:25.283 [2024-04-27 05:07:55.036743] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:25.283 [2024-04-27 05:07:55.036953] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:25:25.283 [2024-04-27 05:07:55.037857] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:25:25.283 [2024-04-27 05:07:55.037887] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:25:25.283 [2024-04-27 05:07:55.038107] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:25.283 05:07:55 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:25.283 05:07:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:25.283 05:07:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:25.283 05:07:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:25.283 05:07:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:25.283 05:07:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:25.283 05:07:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:25.283 05:07:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:25.283 05:07:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:25.283 05:07:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:25.283 05:07:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.283 05:07:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.542 05:07:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:25.542 "name": "raid_bdev1", 00:25:25.542 "uuid": "bec971e4-c0e1-426a-9d62-ccf2c66b4c67", 00:25:25.542 "strip_size_kb": 64, 00:25:25.542 "state": "online", 00:25:25.542 "raid_level": "raid5f", 00:25:25.542 "superblock": true, 00:25:25.542 "num_base_bdevs": 3, 00:25:25.542 "num_base_bdevs_discovered": 3, 00:25:25.542 "num_base_bdevs_operational": 3, 00:25:25.542 "base_bdevs_list": [ 00:25:25.542 { 00:25:25.542 "name": "BaseBdev1", 00:25:25.542 "uuid": "e85111c5-6732-58bf-a997-e8cc5ae6dc69", 00:25:25.542 "is_configured": true, 00:25:25.542 "data_offset": 2048, 00:25:25.542 "data_size": 63488 00:25:25.542 }, 00:25:25.542 { 00:25:25.542 "name": "BaseBdev2", 00:25:25.542 "uuid": "7478abee-1b05-51e4-9a49-777c7a1aaf60", 00:25:25.542 "is_configured": true, 00:25:25.542 "data_offset": 2048, 00:25:25.542 "data_size": 63488 00:25:25.542 }, 00:25:25.542 { 00:25:25.542 "name": "BaseBdev3", 00:25:25.542 "uuid": "54dae698-43c3-539c-93f7-1f753b0e8018", 00:25:25.542 "is_configured": true, 00:25:25.542 "data_offset": 2048, 00:25:25.542 "data_size": 63488 00:25:25.542 } 00:25:25.542 ] 00:25:25.542 }' 
00:25:25.542 05:07:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:25.542 05:07:55 -- common/autotest_common.sh@10 -- # set +x 00:25:26.109 05:07:55 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:26.109 05:07:55 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:26.367 [2024-04-27 05:07:56.158589] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:26.367 05:07:56 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:25:26.367 05:07:56 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:26.367 05:07:56 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.626 05:07:56 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:25:26.626 05:07:56 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:25:26.626 05:07:56 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:25:26.626 05:07:56 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:26.626 05:07:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:26.626 05:07:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:26.626 05:07:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:26.626 05:07:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:26.626 05:07:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:26.626 05:07:56 -- bdev/nbd_common.sh@12 -- # local i 00:25:26.626 05:07:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:26.626 05:07:56 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:26.626 05:07:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:26.885 [2024-04-27 05:07:56.662619] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:25:26.885 /dev/nbd0 00:25:26.885 05:07:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:26.885 05:07:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:26.885 05:07:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:26.885 05:07:56 -- common/autotest_common.sh@857 -- # local i 00:25:26.885 05:07:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:26.885 05:07:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:26.885 05:07:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:26.885 05:07:56 -- common/autotest_common.sh@861 -- # break 00:25:26.885 05:07:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:26.885 05:07:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:26.885 05:07:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:26.885 1+0 records in 00:25:26.885 1+0 records out 00:25:26.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383502 s, 10.7 MB/s 00:25:26.885 05:07:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:26.885 05:07:56 -- common/autotest_common.sh@874 -- # size=4096 00:25:26.885 05:07:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:26.885 05:07:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:26.885 05:07:56 -- common/autotest_common.sh@877 -- # return 0 00:25:26.885 05:07:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:26.885 
05:07:56 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:26.885 05:07:56 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:25:26.885 05:07:56 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:25:26.885 05:07:56 -- bdev/bdev_raid.sh@582 -- # echo 128 00:25:26.885 05:07:56 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:25:27.453 496+0 records in 00:25:27.453 496+0 records out 00:25:27.453 65011712 bytes (65 MB, 62 MiB) copied, 0.409931 s, 159 MB/s 00:25:27.453 05:07:57 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:27.453 05:07:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:27.453 05:07:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:27.453 05:07:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:27.453 05:07:57 -- bdev/nbd_common.sh@51 -- # local i 00:25:27.453 05:07:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:27.453 05:07:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:27.711 05:07:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:27.711 [2024-04-27 05:07:57.388580] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:27.711 05:07:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:27.711 05:07:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:27.711 05:07:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:27.711 05:07:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:27.711 05:07:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:27.711 05:07:57 -- bdev/nbd_common.sh@41 -- # break 00:25:27.711 05:07:57 -- bdev/nbd_common.sh@45 -- # return 0 00:25:27.711 05:07:57 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:27.711 [2024-04-27 05:07:57.616298] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:28.006 05:07:57 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:28.006 05:07:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:28.006 05:07:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:28.006 05:07:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:28.006 05:07:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:28.006 05:07:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:28.006 05:07:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:28.006 05:07:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:28.006 05:07:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:28.006 05:07:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:28.006 05:07:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.006 05:07:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.264 05:07:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:28.265 "name": "raid_bdev1", 00:25:28.265 "uuid": "bec971e4-c0e1-426a-9d62-ccf2c66b4c67", 00:25:28.265 "strip_size_kb": 64, 00:25:28.265 "state": "online", 00:25:28.265 "raid_level": "raid5f", 00:25:28.265 "superblock": true, 00:25:28.265 "num_base_bdevs": 3, 00:25:28.265 "num_base_bdevs_discovered": 2, 00:25:28.265 "num_base_bdevs_operational": 2, 00:25:28.265 
"base_bdevs_list": [ 00:25:28.265 { 00:25:28.265 "name": null, 00:25:28.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.265 "is_configured": false, 00:25:28.265 "data_offset": 2048, 00:25:28.265 "data_size": 63488 00:25:28.265 }, 00:25:28.265 { 00:25:28.265 "name": "BaseBdev2", 00:25:28.265 "uuid": "7478abee-1b05-51e4-9a49-777c7a1aaf60", 00:25:28.265 "is_configured": true, 00:25:28.265 "data_offset": 2048, 00:25:28.265 "data_size": 63488 00:25:28.265 }, 00:25:28.265 { 00:25:28.265 "name": "BaseBdev3", 00:25:28.265 "uuid": "54dae698-43c3-539c-93f7-1f753b0e8018", 00:25:28.265 "is_configured": true, 00:25:28.265 "data_offset": 2048, 00:25:28.265 "data_size": 63488 00:25:28.265 } 00:25:28.265 ] 00:25:28.265 }' 00:25:28.265 05:07:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:28.265 05:07:57 -- common/autotest_common.sh@10 -- # set +x 00:25:28.832 05:07:58 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:29.091 [2024-04-27 05:07:58.824625] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:29.091 [2024-04-27 05:07:58.824710] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:29.091 [2024-04-27 05:07:58.831126] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028b70 00:25:29.091 [2024-04-27 05:07:58.834138] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:29.091 05:07:58 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:30.026 05:07:59 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:30.026 05:07:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:30.026 05:07:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:30.026 05:07:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:30.026 05:07:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:30.026 05:07:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.026 05:07:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:30.284 05:08:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:30.284 "name": "raid_bdev1", 00:25:30.284 "uuid": "bec971e4-c0e1-426a-9d62-ccf2c66b4c67", 00:25:30.284 "strip_size_kb": 64, 00:25:30.284 "state": "online", 00:25:30.284 "raid_level": "raid5f", 00:25:30.284 "superblock": true, 00:25:30.284 "num_base_bdevs": 3, 00:25:30.284 "num_base_bdevs_discovered": 3, 00:25:30.284 "num_base_bdevs_operational": 3, 00:25:30.284 "process": { 00:25:30.284 "type": "rebuild", 00:25:30.284 "target": "spare", 00:25:30.284 "progress": { 00:25:30.284 "blocks": 24576, 00:25:30.284 "percent": 19 00:25:30.284 } 00:25:30.284 }, 00:25:30.284 "base_bdevs_list": [ 00:25:30.284 { 00:25:30.284 "name": "spare", 00:25:30.284 "uuid": "17de7447-b31e-5670-b5f6-431883bf617b", 00:25:30.284 "is_configured": true, 00:25:30.284 "data_offset": 2048, 00:25:30.284 "data_size": 63488 00:25:30.284 }, 00:25:30.284 { 00:25:30.284 "name": "BaseBdev2", 00:25:30.284 "uuid": "7478abee-1b05-51e4-9a49-777c7a1aaf60", 00:25:30.284 "is_configured": true, 00:25:30.284 "data_offset": 2048, 00:25:30.284 "data_size": 63488 00:25:30.284 }, 00:25:30.284 { 00:25:30.284 "name": "BaseBdev3", 00:25:30.284 "uuid": "54dae698-43c3-539c-93f7-1f753b0e8018", 00:25:30.284 "is_configured": true, 00:25:30.284 "data_offset": 2048, 
00:25:30.284 "data_size": 63488 00:25:30.284 } 00:25:30.284 ] 00:25:30.284 }' 00:25:30.284 05:08:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:30.284 05:08:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:30.284 05:08:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:30.542 05:08:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:30.542 05:08:00 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:30.542 [2024-04-27 05:08:00.428576] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:30.800 [2024-04-27 05:08:00.456425] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:30.800 [2024-04-27 05:08:00.456638] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:30.800 05:08:00 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:30.800 05:08:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:30.800 05:08:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:30.800 05:08:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:30.800 05:08:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:30.800 05:08:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:30.800 05:08:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:30.800 05:08:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:30.800 05:08:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:30.800 05:08:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:30.800 05:08:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.800 05:08:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.058 05:08:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:31.058 "name": "raid_bdev1", 00:25:31.058 "uuid": "bec971e4-c0e1-426a-9d62-ccf2c66b4c67", 00:25:31.058 "strip_size_kb": 64, 00:25:31.058 "state": "online", 00:25:31.058 "raid_level": "raid5f", 00:25:31.058 "superblock": true, 00:25:31.058 "num_base_bdevs": 3, 00:25:31.058 "num_base_bdevs_discovered": 2, 00:25:31.058 "num_base_bdevs_operational": 2, 00:25:31.058 "base_bdevs_list": [ 00:25:31.058 { 00:25:31.058 "name": null, 00:25:31.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.058 "is_configured": false, 00:25:31.058 "data_offset": 2048, 00:25:31.058 "data_size": 63488 00:25:31.058 }, 00:25:31.058 { 00:25:31.058 "name": "BaseBdev2", 00:25:31.058 "uuid": "7478abee-1b05-51e4-9a49-777c7a1aaf60", 00:25:31.058 "is_configured": true, 00:25:31.058 "data_offset": 2048, 00:25:31.058 "data_size": 63488 00:25:31.058 }, 00:25:31.058 { 00:25:31.058 "name": "BaseBdev3", 00:25:31.058 "uuid": "54dae698-43c3-539c-93f7-1f753b0e8018", 00:25:31.058 "is_configured": true, 00:25:31.058 "data_offset": 2048, 00:25:31.058 "data_size": 63488 00:25:31.058 } 00:25:31.058 ] 00:25:31.058 }' 00:25:31.058 05:08:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:31.058 05:08:00 -- common/autotest_common.sh@10 -- # set +x 00:25:31.625 05:08:01 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:31.625 05:08:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:31.625 05:08:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:31.625 
05:08:01 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:31.625 05:08:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:31.625 05:08:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.625 05:08:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.884 05:08:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:31.884 "name": "raid_bdev1", 00:25:31.884 "uuid": "bec971e4-c0e1-426a-9d62-ccf2c66b4c67", 00:25:31.884 "strip_size_kb": 64, 00:25:31.884 "state": "online", 00:25:31.884 "raid_level": "raid5f", 00:25:31.884 "superblock": true, 00:25:31.884 "num_base_bdevs": 3, 00:25:31.884 "num_base_bdevs_discovered": 2, 00:25:31.884 "num_base_bdevs_operational": 2, 00:25:31.884 "base_bdevs_list": [ 00:25:31.884 { 00:25:31.884 "name": null, 00:25:31.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.884 "is_configured": false, 00:25:31.884 "data_offset": 2048, 00:25:31.884 "data_size": 63488 00:25:31.884 }, 00:25:31.884 { 00:25:31.884 "name": "BaseBdev2", 00:25:31.884 "uuid": "7478abee-1b05-51e4-9a49-777c7a1aaf60", 00:25:31.884 "is_configured": true, 00:25:31.884 "data_offset": 2048, 00:25:31.884 "data_size": 63488 00:25:31.884 }, 00:25:31.884 { 00:25:31.884 "name": "BaseBdev3", 00:25:31.884 "uuid": "54dae698-43c3-539c-93f7-1f753b0e8018", 00:25:31.884 "is_configured": true, 00:25:31.884 "data_offset": 2048, 00:25:31.884 "data_size": 63488 00:25:31.884 } 00:25:31.884 ] 00:25:31.884 }' 00:25:31.884 05:08:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:31.884 05:08:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:31.884 05:08:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:31.884 05:08:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:31.884 05:08:01 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:32.143 [2024-04-27 05:08:02.005687] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:32.143 [2024-04-27 05:08:02.005769] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:32.143 [2024-04-27 05:08:02.012282] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028d10 00:25:32.143 [2024-04-27 05:08:02.015134] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:32.143 05:08:02 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:33.516 05:08:03 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:33.516 05:08:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:33.516 05:08:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:33.516 05:08:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:33.516 05:08:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:33.516 05:08:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.516 05:08:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.516 05:08:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:33.516 "name": "raid_bdev1", 00:25:33.516 "uuid": "bec971e4-c0e1-426a-9d62-ccf2c66b4c67", 00:25:33.516 "strip_size_kb": 64, 00:25:33.516 "state": "online", 00:25:33.516 "raid_level": "raid5f", 00:25:33.516 "superblock": true, 
00:25:33.516 "num_base_bdevs": 3, 00:25:33.516 "num_base_bdevs_discovered": 3, 00:25:33.516 "num_base_bdevs_operational": 3, 00:25:33.516 "process": { 00:25:33.516 "type": "rebuild", 00:25:33.516 "target": "spare", 00:25:33.517 "progress": { 00:25:33.517 "blocks": 24576, 00:25:33.517 "percent": 19 00:25:33.517 } 00:25:33.517 }, 00:25:33.517 "base_bdevs_list": [ 00:25:33.517 { 00:25:33.517 "name": "spare", 00:25:33.517 "uuid": "17de7447-b31e-5670-b5f6-431883bf617b", 00:25:33.517 "is_configured": true, 00:25:33.517 "data_offset": 2048, 00:25:33.517 "data_size": 63488 00:25:33.517 }, 00:25:33.517 { 00:25:33.517 "name": "BaseBdev2", 00:25:33.517 "uuid": "7478abee-1b05-51e4-9a49-777c7a1aaf60", 00:25:33.517 "is_configured": true, 00:25:33.517 "data_offset": 2048, 00:25:33.517 "data_size": 63488 00:25:33.517 }, 00:25:33.517 { 00:25:33.517 "name": "BaseBdev3", 00:25:33.517 "uuid": "54dae698-43c3-539c-93f7-1f753b0e8018", 00:25:33.517 "is_configured": true, 00:25:33.517 "data_offset": 2048, 00:25:33.517 "data_size": 63488 00:25:33.517 } 00:25:33.517 ] 00:25:33.517 }' 00:25:33.517 05:08:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:33.517 05:08:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:33.517 05:08:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:33.517 05:08:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:33.517 05:08:03 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:25:33.517 05:08:03 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:25:33.517 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:25:33.517 05:08:03 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:25:33.517 05:08:03 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:25:33.517 05:08:03 -- bdev/bdev_raid.sh@657 -- # local timeout=649 00:25:33.517 05:08:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:33.517 05:08:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:33.517 05:08:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:33.517 05:08:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:33.517 05:08:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:33.517 05:08:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:33.517 05:08:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.517 05:08:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.774 05:08:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:33.774 "name": "raid_bdev1", 00:25:33.774 "uuid": "bec971e4-c0e1-426a-9d62-ccf2c66b4c67", 00:25:33.774 "strip_size_kb": 64, 00:25:33.774 "state": "online", 00:25:33.774 "raid_level": "raid5f", 00:25:33.774 "superblock": true, 00:25:33.774 "num_base_bdevs": 3, 00:25:33.774 "num_base_bdevs_discovered": 3, 00:25:33.774 "num_base_bdevs_operational": 3, 00:25:33.774 "process": { 00:25:33.774 "type": "rebuild", 00:25:33.774 "target": "spare", 00:25:33.774 "progress": { 00:25:33.774 "blocks": 32768, 00:25:33.774 "percent": 25 00:25:33.774 } 00:25:33.774 }, 00:25:33.774 "base_bdevs_list": [ 00:25:33.774 { 00:25:33.774 "name": "spare", 00:25:33.774 "uuid": "17de7447-b31e-5670-b5f6-431883bf617b", 00:25:33.774 "is_configured": true, 00:25:33.774 "data_offset": 2048, 00:25:33.774 "data_size": 63488 00:25:33.774 }, 00:25:33.774 { 00:25:33.774 "name": "BaseBdev2", 
00:25:33.774 "uuid": "7478abee-1b05-51e4-9a49-777c7a1aaf60", 00:25:33.774 "is_configured": true, 00:25:33.774 "data_offset": 2048, 00:25:33.774 "data_size": 63488 00:25:33.774 }, 00:25:33.774 { 00:25:33.774 "name": "BaseBdev3", 00:25:33.774 "uuid": "54dae698-43c3-539c-93f7-1f753b0e8018", 00:25:33.774 "is_configured": true, 00:25:33.774 "data_offset": 2048, 00:25:33.774 "data_size": 63488 00:25:33.774 } 00:25:33.774 ] 00:25:33.774 }' 00:25:33.774 05:08:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:34.033 05:08:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:34.033 05:08:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:34.033 05:08:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:34.033 05:08:03 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:34.975 05:08:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:34.975 05:08:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:34.975 05:08:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:34.975 05:08:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:34.975 05:08:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:34.975 05:08:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:34.975 05:08:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.975 05:08:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.234 05:08:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:35.234 "name": "raid_bdev1", 00:25:35.234 "uuid": "bec971e4-c0e1-426a-9d62-ccf2c66b4c67", 00:25:35.234 "strip_size_kb": 64, 00:25:35.234 "state": "online", 00:25:35.234 "raid_level": "raid5f", 00:25:35.234 "superblock": true, 00:25:35.234 "num_base_bdevs": 3, 00:25:35.234 "num_base_bdevs_discovered": 3, 00:25:35.234 "num_base_bdevs_operational": 3, 00:25:35.234 "process": { 00:25:35.234 "type": "rebuild", 00:25:35.234 "target": "spare", 00:25:35.234 "progress": { 00:25:35.234 "blocks": 61440, 00:25:35.234 "percent": 48 00:25:35.234 } 00:25:35.234 }, 00:25:35.234 "base_bdevs_list": [ 00:25:35.234 { 00:25:35.234 "name": "spare", 00:25:35.234 "uuid": "17de7447-b31e-5670-b5f6-431883bf617b", 00:25:35.234 "is_configured": true, 00:25:35.234 "data_offset": 2048, 00:25:35.234 "data_size": 63488 00:25:35.234 }, 00:25:35.234 { 00:25:35.234 "name": "BaseBdev2", 00:25:35.234 "uuid": "7478abee-1b05-51e4-9a49-777c7a1aaf60", 00:25:35.234 "is_configured": true, 00:25:35.234 "data_offset": 2048, 00:25:35.234 "data_size": 63488 00:25:35.234 }, 00:25:35.234 { 00:25:35.234 "name": "BaseBdev3", 00:25:35.234 "uuid": "54dae698-43c3-539c-93f7-1f753b0e8018", 00:25:35.234 "is_configured": true, 00:25:35.234 "data_offset": 2048, 00:25:35.234 "data_size": 63488 00:25:35.234 } 00:25:35.234 ] 00:25:35.234 }' 00:25:35.234 05:08:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:35.493 05:08:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:35.493 05:08:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:35.493 05:08:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:35.493 05:08:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:36.430 05:08:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:36.430 05:08:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:36.430 05:08:06 -- bdev/bdev_raid.sh@183 -- # 
local raid_bdev_name=raid_bdev1 00:25:36.430 05:08:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:36.430 05:08:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:36.430 05:08:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:36.430 05:08:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.430 05:08:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.689 05:08:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:36.689 "name": "raid_bdev1", 00:25:36.689 "uuid": "bec971e4-c0e1-426a-9d62-ccf2c66b4c67", 00:25:36.689 "strip_size_kb": 64, 00:25:36.689 "state": "online", 00:25:36.689 "raid_level": "raid5f", 00:25:36.689 "superblock": true, 00:25:36.689 "num_base_bdevs": 3, 00:25:36.689 "num_base_bdevs_discovered": 3, 00:25:36.689 "num_base_bdevs_operational": 3, 00:25:36.689 "process": { 00:25:36.689 "type": "rebuild", 00:25:36.689 "target": "spare", 00:25:36.689 "progress": { 00:25:36.689 "blocks": 88064, 00:25:36.689 "percent": 69 00:25:36.689 } 00:25:36.689 }, 00:25:36.689 "base_bdevs_list": [ 00:25:36.689 { 00:25:36.689 "name": "spare", 00:25:36.689 "uuid": "17de7447-b31e-5670-b5f6-431883bf617b", 00:25:36.689 "is_configured": true, 00:25:36.689 "data_offset": 2048, 00:25:36.689 "data_size": 63488 00:25:36.689 }, 00:25:36.689 { 00:25:36.689 "name": "BaseBdev2", 00:25:36.689 "uuid": "7478abee-1b05-51e4-9a49-777c7a1aaf60", 00:25:36.689 "is_configured": true, 00:25:36.689 "data_offset": 2048, 00:25:36.689 "data_size": 63488 00:25:36.689 }, 00:25:36.689 { 00:25:36.689 "name": "BaseBdev3", 00:25:36.689 "uuid": "54dae698-43c3-539c-93f7-1f753b0e8018", 00:25:36.689 "is_configured": true, 00:25:36.689 "data_offset": 2048, 00:25:36.689 "data_size": 63488 00:25:36.689 } 00:25:36.689 ] 00:25:36.689 }' 00:25:36.689 05:08:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:36.689 05:08:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:36.689 05:08:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:36.689 05:08:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:36.689 05:08:06 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:38.065 05:08:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:38.065 05:08:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:38.065 05:08:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:38.065 05:08:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:38.065 05:08:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:38.065 05:08:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:38.065 05:08:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.065 05:08:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.065 05:08:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:38.065 "name": "raid_bdev1", 00:25:38.065 "uuid": "bec971e4-c0e1-426a-9d62-ccf2c66b4c67", 00:25:38.065 "strip_size_kb": 64, 00:25:38.065 "state": "online", 00:25:38.065 "raid_level": "raid5f", 00:25:38.065 "superblock": true, 00:25:38.065 "num_base_bdevs": 3, 00:25:38.065 "num_base_bdevs_discovered": 3, 00:25:38.065 "num_base_bdevs_operational": 3, 00:25:38.065 "process": { 00:25:38.065 "type": "rebuild", 00:25:38.065 "target": "spare", 00:25:38.065 "progress": { 00:25:38.065 
"blocks": 116736, 00:25:38.065 "percent": 91 00:25:38.065 } 00:25:38.065 }, 00:25:38.065 "base_bdevs_list": [ 00:25:38.065 { 00:25:38.065 "name": "spare", 00:25:38.065 "uuid": "17de7447-b31e-5670-b5f6-431883bf617b", 00:25:38.065 "is_configured": true, 00:25:38.065 "data_offset": 2048, 00:25:38.065 "data_size": 63488 00:25:38.065 }, 00:25:38.065 { 00:25:38.065 "name": "BaseBdev2", 00:25:38.065 "uuid": "7478abee-1b05-51e4-9a49-777c7a1aaf60", 00:25:38.065 "is_configured": true, 00:25:38.065 "data_offset": 2048, 00:25:38.065 "data_size": 63488 00:25:38.065 }, 00:25:38.065 { 00:25:38.065 "name": "BaseBdev3", 00:25:38.065 "uuid": "54dae698-43c3-539c-93f7-1f753b0e8018", 00:25:38.065 "is_configured": true, 00:25:38.065 "data_offset": 2048, 00:25:38.065 "data_size": 63488 00:25:38.065 } 00:25:38.065 ] 00:25:38.065 }' 00:25:38.065 05:08:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:38.065 05:08:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:38.065 05:08:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:38.065 05:08:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:38.065 05:08:07 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:38.633 [2024-04-27 05:08:08.300066] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:38.633 [2024-04-27 05:08:08.300197] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:38.633 [2024-04-27 05:08:08.300480] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:39.199 05:08:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:39.199 05:08:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:39.199 05:08:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:39.199 05:08:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:39.199 05:08:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:39.199 05:08:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:39.199 05:08:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.199 05:08:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.458 05:08:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:39.458 "name": "raid_bdev1", 00:25:39.458 "uuid": "bec971e4-c0e1-426a-9d62-ccf2c66b4c67", 00:25:39.458 "strip_size_kb": 64, 00:25:39.458 "state": "online", 00:25:39.458 "raid_level": "raid5f", 00:25:39.458 "superblock": true, 00:25:39.458 "num_base_bdevs": 3, 00:25:39.458 "num_base_bdevs_discovered": 3, 00:25:39.458 "num_base_bdevs_operational": 3, 00:25:39.458 "base_bdevs_list": [ 00:25:39.458 { 00:25:39.458 "name": "spare", 00:25:39.458 "uuid": "17de7447-b31e-5670-b5f6-431883bf617b", 00:25:39.458 "is_configured": true, 00:25:39.458 "data_offset": 2048, 00:25:39.458 "data_size": 63488 00:25:39.458 }, 00:25:39.458 { 00:25:39.458 "name": "BaseBdev2", 00:25:39.458 "uuid": "7478abee-1b05-51e4-9a49-777c7a1aaf60", 00:25:39.458 "is_configured": true, 00:25:39.458 "data_offset": 2048, 00:25:39.458 "data_size": 63488 00:25:39.458 }, 00:25:39.458 { 00:25:39.458 "name": "BaseBdev3", 00:25:39.458 "uuid": "54dae698-43c3-539c-93f7-1f753b0e8018", 00:25:39.458 "is_configured": true, 00:25:39.458 "data_offset": 2048, 00:25:39.458 "data_size": 63488 00:25:39.458 } 00:25:39.458 ] 00:25:39.458 }' 00:25:39.458 05:08:09 -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.type // "none"' 00:25:39.458 05:08:09 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:39.458 05:08:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:39.458 05:08:09 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:39.458 05:08:09 -- bdev/bdev_raid.sh@660 -- # break 00:25:39.458 05:08:09 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:39.458 05:08:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:39.458 05:08:09 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:39.458 05:08:09 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:39.458 05:08:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:39.458 05:08:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.458 05:08:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.716 05:08:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:39.716 "name": "raid_bdev1", 00:25:39.716 "uuid": "bec971e4-c0e1-426a-9d62-ccf2c66b4c67", 00:25:39.716 "strip_size_kb": 64, 00:25:39.716 "state": "online", 00:25:39.716 "raid_level": "raid5f", 00:25:39.716 "superblock": true, 00:25:39.716 "num_base_bdevs": 3, 00:25:39.716 "num_base_bdevs_discovered": 3, 00:25:39.716 "num_base_bdevs_operational": 3, 00:25:39.716 "base_bdevs_list": [ 00:25:39.716 { 00:25:39.716 "name": "spare", 00:25:39.716 "uuid": "17de7447-b31e-5670-b5f6-431883bf617b", 00:25:39.716 "is_configured": true, 00:25:39.716 "data_offset": 2048, 00:25:39.716 "data_size": 63488 00:25:39.716 }, 00:25:39.716 { 00:25:39.716 "name": "BaseBdev2", 00:25:39.716 "uuid": "7478abee-1b05-51e4-9a49-777c7a1aaf60", 00:25:39.716 "is_configured": true, 00:25:39.716 "data_offset": 2048, 00:25:39.716 "data_size": 63488 00:25:39.716 }, 00:25:39.716 { 00:25:39.716 "name": "BaseBdev3", 00:25:39.716 "uuid": "54dae698-43c3-539c-93f7-1f753b0e8018", 00:25:39.716 "is_configured": true, 00:25:39.717 "data_offset": 2048, 00:25:39.717 "data_size": 63488 00:25:39.717 } 00:25:39.717 ] 00:25:39.717 }' 00:25:39.717 05:08:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:39.717 05:08:09 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:39.717 05:08:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:39.975 05:08:09 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:39.975 05:08:09 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:39.975 05:08:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:39.975 05:08:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:39.975 05:08:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:39.975 05:08:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:39.975 05:08:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:39.975 05:08:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:39.975 05:08:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:39.975 05:08:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:39.975 05:08:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:39.975 05:08:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.975 05:08:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.233 05:08:09 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:25:40.233 "name": "raid_bdev1", 00:25:40.233 "uuid": "bec971e4-c0e1-426a-9d62-ccf2c66b4c67", 00:25:40.233 "strip_size_kb": 64, 00:25:40.233 "state": "online", 00:25:40.233 "raid_level": "raid5f", 00:25:40.233 "superblock": true, 00:25:40.233 "num_base_bdevs": 3, 00:25:40.233 "num_base_bdevs_discovered": 3, 00:25:40.233 "num_base_bdevs_operational": 3, 00:25:40.233 "base_bdevs_list": [ 00:25:40.233 { 00:25:40.233 "name": "spare", 00:25:40.233 "uuid": "17de7447-b31e-5670-b5f6-431883bf617b", 00:25:40.233 "is_configured": true, 00:25:40.233 "data_offset": 2048, 00:25:40.233 "data_size": 63488 00:25:40.233 }, 00:25:40.233 { 00:25:40.233 "name": "BaseBdev2", 00:25:40.233 "uuid": "7478abee-1b05-51e4-9a49-777c7a1aaf60", 00:25:40.233 "is_configured": true, 00:25:40.233 "data_offset": 2048, 00:25:40.233 "data_size": 63488 00:25:40.233 }, 00:25:40.233 { 00:25:40.233 "name": "BaseBdev3", 00:25:40.233 "uuid": "54dae698-43c3-539c-93f7-1f753b0e8018", 00:25:40.233 "is_configured": true, 00:25:40.233 "data_offset": 2048, 00:25:40.233 "data_size": 63488 00:25:40.233 } 00:25:40.233 ] 00:25:40.233 }' 00:25:40.233 05:08:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:40.233 05:08:09 -- common/autotest_common.sh@10 -- # set +x 00:25:40.799 05:08:10 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:41.057 [2024-04-27 05:08:10.801355] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:41.057 [2024-04-27 05:08:10.801419] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:41.057 [2024-04-27 05:08:10.801558] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:41.057 [2024-04-27 05:08:10.801691] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:41.057 [2024-04-27 05:08:10.801708] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:25:41.057 05:08:10 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.057 05:08:10 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:41.315 05:08:11 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:41.315 05:08:11 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:41.315 05:08:11 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:41.315 05:08:11 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:41.315 05:08:11 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:41.315 05:08:11 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:41.315 05:08:11 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:41.315 05:08:11 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:41.315 05:08:11 -- bdev/nbd_common.sh@12 -- # local i 00:25:41.315 05:08:11 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:41.315 05:08:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:41.315 05:08:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:41.603 /dev/nbd0 00:25:41.603 05:08:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:41.603 05:08:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:41.603 05:08:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:41.603 
05:08:11 -- common/autotest_common.sh@857 -- # local i 00:25:41.603 05:08:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:41.603 05:08:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:41.603 05:08:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:41.603 05:08:11 -- common/autotest_common.sh@861 -- # break 00:25:41.603 05:08:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:41.604 05:08:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:41.604 05:08:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:41.604 1+0 records in 00:25:41.604 1+0 records out 00:25:41.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573515 s, 7.1 MB/s 00:25:41.604 05:08:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.604 05:08:11 -- common/autotest_common.sh@874 -- # size=4096 00:25:41.604 05:08:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.604 05:08:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:41.604 05:08:11 -- common/autotest_common.sh@877 -- # return 0 00:25:41.604 05:08:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:41.604 05:08:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:41.604 05:08:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:41.878 /dev/nbd1 00:25:41.878 05:08:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:41.878 05:08:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:41.878 05:08:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:25:41.878 05:08:11 -- common/autotest_common.sh@857 -- # local i 00:25:41.878 05:08:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:41.878 05:08:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:41.878 05:08:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:25:41.878 05:08:11 -- common/autotest_common.sh@861 -- # break 00:25:41.878 05:08:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:41.878 05:08:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:41.878 05:08:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:41.878 1+0 records in 00:25:41.878 1+0 records out 00:25:41.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000717413 s, 5.7 MB/s 00:25:41.878 05:08:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.878 05:08:11 -- common/autotest_common.sh@874 -- # size=4096 00:25:41.878 05:08:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.878 05:08:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:41.878 05:08:11 -- common/autotest_common.sh@877 -- # return 0 00:25:41.878 05:08:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:41.878 05:08:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:41.878 05:08:11 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:42.136 05:08:11 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:42.136 05:08:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:42.136 05:08:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:42.136 05:08:11 -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:25:42.136 05:08:11 -- bdev/nbd_common.sh@51 -- # local i 00:25:42.136 05:08:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:42.136 05:08:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:42.394 05:08:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:42.394 05:08:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:42.394 05:08:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:42.394 05:08:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:42.394 05:08:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:42.394 05:08:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:42.394 05:08:12 -- bdev/nbd_common.sh@41 -- # break 00:25:42.394 05:08:12 -- bdev/nbd_common.sh@45 -- # return 0 00:25:42.394 05:08:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:42.394 05:08:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:42.653 05:08:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:42.653 05:08:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:42.653 05:08:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:42.653 05:08:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:42.653 05:08:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:42.653 05:08:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:42.653 05:08:12 -- bdev/nbd_common.sh@41 -- # break 00:25:42.653 05:08:12 -- bdev/nbd_common.sh@45 -- # return 0 00:25:42.653 05:08:12 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:25:42.653 05:08:12 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:42.653 05:08:12 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:25:42.653 05:08:12 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:42.911 05:08:12 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:43.170 [2024-04-27 05:08:12.886502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:43.170 [2024-04-27 05:08:12.886657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:43.170 [2024-04-27 05:08:12.886707] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:25:43.170 [2024-04-27 05:08:12.886743] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:43.170 [2024-04-27 05:08:12.889638] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:43.170 [2024-04-27 05:08:12.889725] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:43.170 [2024-04-27 05:08:12.889856] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:43.170 [2024-04-27 05:08:12.889933] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:43.171 BaseBdev1 00:25:43.171 05:08:12 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:43.171 05:08:12 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:25:43.171 05:08:12 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 
00:25:43.429 05:08:13 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:43.688 [2024-04-27 05:08:13.406603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:43.688 [2024-04-27 05:08:13.406743] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:43.688 [2024-04-27 05:08:13.406800] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:43.688 [2024-04-27 05:08:13.406826] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:43.688 [2024-04-27 05:08:13.407401] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:43.688 [2024-04-27 05:08:13.407469] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:43.688 [2024-04-27 05:08:13.407582] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:25:43.688 [2024-04-27 05:08:13.407599] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:25:43.688 [2024-04-27 05:08:13.407607] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:43.688 [2024-04-27 05:08:13.407638] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state configuring 00:25:43.688 [2024-04-27 05:08:13.407698] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:43.688 BaseBdev2 00:25:43.688 05:08:13 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:43.688 05:08:13 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:25:43.688 05:08:13 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:25:43.946 05:08:13 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:44.205 [2024-04-27 05:08:13.914730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:44.205 [2024-04-27 05:08:13.914871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:44.205 [2024-04-27 05:08:13.914941] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:25:44.205 [2024-04-27 05:08:13.914969] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:44.205 [2024-04-27 05:08:13.915533] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:44.205 [2024-04-27 05:08:13.915605] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:44.205 [2024-04-27 05:08:13.915712] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:25:44.205 [2024-04-27 05:08:13.915742] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:44.205 BaseBdev3 00:25:44.205 05:08:13 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:44.463 05:08:14 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:44.722 [2024-04-27 05:08:14.386889] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:44.722 [2024-04-27 05:08:14.387027] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:44.722 [2024-04-27 05:08:14.387082] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:25:44.722 [2024-04-27 05:08:14.387118] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:44.722 [2024-04-27 05:08:14.387696] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:44.722 [2024-04-27 05:08:14.387768] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:44.722 [2024-04-27 05:08:14.387882] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:25:44.722 [2024-04-27 05:08:14.387912] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:44.722 spare 00:25:44.722 05:08:14 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:44.722 05:08:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:44.722 05:08:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:44.722 05:08:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:44.722 05:08:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:44.722 05:08:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:44.722 05:08:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:44.722 05:08:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:44.722 05:08:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:44.722 05:08:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:44.722 05:08:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.722 05:08:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.722 [2024-04-27 05:08:14.488088] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b480 00:25:44.722 [2024-04-27 05:08:14.488146] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:44.722 [2024-04-27 05:08:14.488390] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:25:44.722 [2024-04-27 05:08:14.489427] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b480 00:25:44.722 [2024-04-27 05:08:14.489456] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b480 00:25:44.722 [2024-04-27 05:08:14.489658] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:44.982 05:08:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:44.982 "name": "raid_bdev1", 00:25:44.982 "uuid": "bec971e4-c0e1-426a-9d62-ccf2c66b4c67", 00:25:44.982 "strip_size_kb": 64, 00:25:44.982 "state": "online", 00:25:44.982 "raid_level": "raid5f", 00:25:44.982 "superblock": true, 00:25:44.982 "num_base_bdevs": 3, 00:25:44.982 "num_base_bdevs_discovered": 3, 00:25:44.982 "num_base_bdevs_operational": 3, 00:25:44.982 "base_bdevs_list": [ 00:25:44.982 { 00:25:44.982 "name": "spare", 00:25:44.982 "uuid": "17de7447-b31e-5670-b5f6-431883bf617b", 00:25:44.982 "is_configured": true, 00:25:44.982 "data_offset": 2048, 00:25:44.982 "data_size": 63488 00:25:44.982 }, 00:25:44.982 { 00:25:44.982 "name": "BaseBdev2", 00:25:44.982 "uuid": "7478abee-1b05-51e4-9a49-777c7a1aaf60", 
00:25:44.982 "is_configured": true, 00:25:44.982 "data_offset": 2048, 00:25:44.982 "data_size": 63488 00:25:44.982 }, 00:25:44.982 { 00:25:44.982 "name": "BaseBdev3", 00:25:44.982 "uuid": "54dae698-43c3-539c-93f7-1f753b0e8018", 00:25:44.982 "is_configured": true, 00:25:44.982 "data_offset": 2048, 00:25:44.982 "data_size": 63488 00:25:44.982 } 00:25:44.982 ] 00:25:44.982 }' 00:25:44.982 05:08:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:44.982 05:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:45.548 05:08:15 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:45.548 05:08:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:45.548 05:08:15 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:45.548 05:08:15 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:45.548 05:08:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:45.548 05:08:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.548 05:08:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.807 05:08:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:45.807 "name": "raid_bdev1", 00:25:45.807 "uuid": "bec971e4-c0e1-426a-9d62-ccf2c66b4c67", 00:25:45.807 "strip_size_kb": 64, 00:25:45.807 "state": "online", 00:25:45.807 "raid_level": "raid5f", 00:25:45.807 "superblock": true, 00:25:45.807 "num_base_bdevs": 3, 00:25:45.807 "num_base_bdevs_discovered": 3, 00:25:45.807 "num_base_bdevs_operational": 3, 00:25:45.807 "base_bdevs_list": [ 00:25:45.807 { 00:25:45.807 "name": "spare", 00:25:45.807 "uuid": "17de7447-b31e-5670-b5f6-431883bf617b", 00:25:45.807 "is_configured": true, 00:25:45.807 "data_offset": 2048, 00:25:45.807 "data_size": 63488 00:25:45.807 }, 00:25:45.807 { 00:25:45.807 "name": "BaseBdev2", 00:25:45.807 "uuid": "7478abee-1b05-51e4-9a49-777c7a1aaf60", 00:25:45.807 "is_configured": true, 00:25:45.807 "data_offset": 2048, 00:25:45.807 "data_size": 63488 00:25:45.807 }, 00:25:45.807 { 00:25:45.807 "name": "BaseBdev3", 00:25:45.807 "uuid": "54dae698-43c3-539c-93f7-1f753b0e8018", 00:25:45.807 "is_configured": true, 00:25:45.807 "data_offset": 2048, 00:25:45.807 "data_size": 63488 00:25:45.807 } 00:25:45.807 ] 00:25:45.807 }' 00:25:45.807 05:08:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:45.807 05:08:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:45.807 05:08:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:45.807 05:08:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:45.807 05:08:15 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.807 05:08:15 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:46.064 05:08:15 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:25:46.064 05:08:15 -- bdev/bdev_raid.sh@709 -- # killprocess 141237 00:25:46.064 05:08:15 -- common/autotest_common.sh@926 -- # '[' -z 141237 ']' 00:25:46.064 05:08:15 -- common/autotest_common.sh@930 -- # kill -0 141237 00:25:46.064 05:08:15 -- common/autotest_common.sh@931 -- # uname 00:25:46.064 05:08:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:46.064 05:08:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141237 00:25:46.064 05:08:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:46.064 05:08:15 -- 
common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:46.064 05:08:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141237' 00:25:46.064 killing process with pid 141237 00:25:46.064 05:08:15 -- common/autotest_common.sh@945 -- # kill 141237 00:25:46.064 Received shutdown signal, test time was about 60.000000 seconds 00:25:46.064 00:25:46.064 Latency(us) 00:25:46.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.064 =================================================================================================================== 00:25:46.064 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:46.064 05:08:15 -- common/autotest_common.sh@950 -- # wait 141237 00:25:46.064 [2024-04-27 05:08:15.926031] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:46.064 [2024-04-27 05:08:15.926163] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:46.064 [2024-04-27 05:08:15.926270] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:46.064 [2024-04-27 05:08:15.926295] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b480 name raid_bdev1, state offline 00:25:46.321 [2024-04-27 05:08:16.008277] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:46.579 00:25:46.579 real 0m25.054s 00:25:46.579 user 0m39.902s 00:25:46.579 sys 0m3.387s 00:25:46.579 05:08:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:46.579 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:25:46.579 ************************************ 00:25:46.579 END TEST raid5f_rebuild_test_sb 00:25:46.579 ************************************ 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:25:46.579 05:08:16 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:25:46.579 05:08:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:46.579 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:25:46.579 ************************************ 00:25:46.579 START TEST raid5f_state_function_test 00:25:46.579 ************************************ 00:25:46.579 05:08:16 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 false 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 
00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:46.579 05:08:16 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:46.580 05:08:16 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:25:46.580 05:08:16 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:25:46.580 05:08:16 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:25:46.580 05:08:16 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:25:46.580 05:08:16 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:25:46.580 05:08:16 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:25:46.580 05:08:16 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:25:46.580 05:08:16 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:25:46.580 05:08:16 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:25:46.580 05:08:16 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:25:46.580 05:08:16 -- bdev/bdev_raid.sh@226 -- # raid_pid=141880 00:25:46.580 05:08:16 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 141880' 00:25:46.580 Process raid pid: 141880 00:25:46.580 05:08:16 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:46.580 05:08:16 -- bdev/bdev_raid.sh@228 -- # waitforlisten 141880 /var/tmp/spdk-raid.sock 00:25:46.580 05:08:16 -- common/autotest_common.sh@819 -- # '[' -z 141880 ']' 00:25:46.580 05:08:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:46.580 05:08:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:46.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:46.580 05:08:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:46.580 05:08:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:46.580 05:08:16 -- common/autotest_common.sh@10 -- # set +x 00:25:46.838 [2024-04-27 05:08:16.497087] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
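The lines just traced are raid_state_function_test bringing up a standalone bdev_svc application on a private RPC socket; waitforlisten (which prints the "Waiting for process to start up..." message) blocks until that socket accepts connections before any bdev_raid RPC is issued, and -L bdev_raid turns on the debug log flag that produces the *DEBUG* lines below. Roughly, and using the paths from this job, the setup is equivalent to:

    # Sketch of the setup traced above; rpc() is shorthand introduced here, not a helper from the repo.
    rpc_sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$rpc_sock" -i 0 -L bdev_raid &
    raid_pid=$!                             # 141880 in this run
    waitforlisten "$raid_pid" "$rpc_sock"   # polls until the UNIX socket is listening
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" "$@"; }

Once the listener is up, every rpc.py call in the trace passes -s /var/tmp/spdk-raid.sock, so the test never touches the default application socket.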
00:25:46.838 [2024-04-27 05:08:16.497314] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.838 [2024-04-27 05:08:16.656766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.096 [2024-04-27 05:08:16.779357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.096 [2024-04-27 05:08:16.858839] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:47.663 05:08:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:47.663 05:08:17 -- common/autotest_common.sh@852 -- # return 0 00:25:47.663 05:08:17 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:47.922 [2024-04-27 05:08:17.688242] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:47.922 [2024-04-27 05:08:17.688367] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:47.922 [2024-04-27 05:08:17.688383] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:47.922 [2024-04-27 05:08:17.688411] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:47.922 [2024-04-27 05:08:17.688419] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:47.922 [2024-04-27 05:08:17.688466] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:47.922 [2024-04-27 05:08:17.688476] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:47.922 [2024-04-27 05:08:17.688503] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:47.922 05:08:17 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:47.922 05:08:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:47.922 05:08:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:47.922 05:08:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:47.922 05:08:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:47.922 05:08:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:47.922 05:08:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:47.922 05:08:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:47.922 05:08:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:47.922 05:08:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:47.922 05:08:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:47.922 05:08:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.179 05:08:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:48.179 "name": "Existed_Raid", 00:25:48.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.179 "strip_size_kb": 64, 00:25:48.179 "state": "configuring", 00:25:48.179 "raid_level": "raid5f", 00:25:48.179 "superblock": false, 00:25:48.179 "num_base_bdevs": 4, 00:25:48.179 "num_base_bdevs_discovered": 0, 00:25:48.179 "num_base_bdevs_operational": 4, 00:25:48.179 "base_bdevs_list": [ 00:25:48.179 { 00:25:48.179 
"name": "BaseBdev1", 00:25:48.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.179 "is_configured": false, 00:25:48.179 "data_offset": 0, 00:25:48.179 "data_size": 0 00:25:48.179 }, 00:25:48.179 { 00:25:48.180 "name": "BaseBdev2", 00:25:48.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.180 "is_configured": false, 00:25:48.180 "data_offset": 0, 00:25:48.180 "data_size": 0 00:25:48.180 }, 00:25:48.180 { 00:25:48.180 "name": "BaseBdev3", 00:25:48.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.180 "is_configured": false, 00:25:48.180 "data_offset": 0, 00:25:48.180 "data_size": 0 00:25:48.180 }, 00:25:48.180 { 00:25:48.180 "name": "BaseBdev4", 00:25:48.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.180 "is_configured": false, 00:25:48.180 "data_offset": 0, 00:25:48.180 "data_size": 0 00:25:48.180 } 00:25:48.180 ] 00:25:48.180 }' 00:25:48.180 05:08:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:48.180 05:08:17 -- common/autotest_common.sh@10 -- # set +x 00:25:48.744 05:08:18 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:49.002 [2024-04-27 05:08:18.896371] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:49.002 [2024-04-27 05:08:18.896440] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:25:49.273 05:08:18 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:49.273 [2024-04-27 05:08:19.140469] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:49.273 [2024-04-27 05:08:19.140600] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:49.273 [2024-04-27 05:08:19.140617] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:49.273 [2024-04-27 05:08:19.140647] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:49.273 [2024-04-27 05:08:19.140656] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:49.273 [2024-04-27 05:08:19.140724] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:49.273 [2024-04-27 05:08:19.140739] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:49.273 [2024-04-27 05:08:19.140774] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:49.273 05:08:19 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:49.550 [2024-04-27 05:08:19.400156] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:49.550 BaseBdev1 00:25:49.550 05:08:19 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:25:49.550 05:08:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:25:49.550 05:08:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:49.550 05:08:19 -- common/autotest_common.sh@889 -- # local i 00:25:49.550 05:08:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:49.550 05:08:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:49.550 05:08:19 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:49.808 05:08:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:50.066 [ 00:25:50.066 { 00:25:50.066 "name": "BaseBdev1", 00:25:50.066 "aliases": [ 00:25:50.066 "0defa468-7a5f-4267-9ef0-109cb602419e" 00:25:50.066 ], 00:25:50.066 "product_name": "Malloc disk", 00:25:50.066 "block_size": 512, 00:25:50.066 "num_blocks": 65536, 00:25:50.066 "uuid": "0defa468-7a5f-4267-9ef0-109cb602419e", 00:25:50.066 "assigned_rate_limits": { 00:25:50.066 "rw_ios_per_sec": 0, 00:25:50.066 "rw_mbytes_per_sec": 0, 00:25:50.066 "r_mbytes_per_sec": 0, 00:25:50.066 "w_mbytes_per_sec": 0 00:25:50.066 }, 00:25:50.066 "claimed": true, 00:25:50.066 "claim_type": "exclusive_write", 00:25:50.066 "zoned": false, 00:25:50.066 "supported_io_types": { 00:25:50.066 "read": true, 00:25:50.066 "write": true, 00:25:50.066 "unmap": true, 00:25:50.066 "write_zeroes": true, 00:25:50.066 "flush": true, 00:25:50.066 "reset": true, 00:25:50.066 "compare": false, 00:25:50.066 "compare_and_write": false, 00:25:50.066 "abort": true, 00:25:50.066 "nvme_admin": false, 00:25:50.066 "nvme_io": false 00:25:50.066 }, 00:25:50.066 "memory_domains": [ 00:25:50.066 { 00:25:50.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.066 "dma_device_type": 2 00:25:50.066 } 00:25:50.066 ], 00:25:50.066 "driver_specific": {} 00:25:50.066 } 00:25:50.066 ] 00:25:50.066 05:08:19 -- common/autotest_common.sh@895 -- # return 0 00:25:50.066 05:08:19 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:50.066 05:08:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:50.066 05:08:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:50.066 05:08:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:50.066 05:08:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:50.066 05:08:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:50.066 05:08:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:50.066 05:08:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:50.066 05:08:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:50.066 05:08:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:50.066 05:08:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:50.066 05:08:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.324 05:08:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:50.324 "name": "Existed_Raid", 00:25:50.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.324 "strip_size_kb": 64, 00:25:50.324 "state": "configuring", 00:25:50.324 "raid_level": "raid5f", 00:25:50.324 "superblock": false, 00:25:50.324 "num_base_bdevs": 4, 00:25:50.324 "num_base_bdevs_discovered": 1, 00:25:50.324 "num_base_bdevs_operational": 4, 00:25:50.324 "base_bdevs_list": [ 00:25:50.324 { 00:25:50.324 "name": "BaseBdev1", 00:25:50.324 "uuid": "0defa468-7a5f-4267-9ef0-109cb602419e", 00:25:50.324 "is_configured": true, 00:25:50.324 "data_offset": 0, 00:25:50.324 "data_size": 65536 00:25:50.324 }, 00:25:50.324 { 00:25:50.324 "name": "BaseBdev2", 00:25:50.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.324 "is_configured": false, 00:25:50.324 "data_offset": 0, 00:25:50.324 "data_size": 0 00:25:50.324 }, 
00:25:50.324 { 00:25:50.324 "name": "BaseBdev3", 00:25:50.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.324 "is_configured": false, 00:25:50.324 "data_offset": 0, 00:25:50.324 "data_size": 0 00:25:50.324 }, 00:25:50.324 { 00:25:50.324 "name": "BaseBdev4", 00:25:50.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.324 "is_configured": false, 00:25:50.324 "data_offset": 0, 00:25:50.324 "data_size": 0 00:25:50.324 } 00:25:50.324 ] 00:25:50.324 }' 00:25:50.324 05:08:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:50.324 05:08:20 -- common/autotest_common.sh@10 -- # set +x 00:25:51.260 05:08:20 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:51.260 [2024-04-27 05:08:21.064659] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:51.260 [2024-04-27 05:08:21.064788] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:25:51.260 05:08:21 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:25:51.260 05:08:21 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:51.518 [2024-04-27 05:08:21.308843] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:51.518 [2024-04-27 05:08:21.311332] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:51.518 [2024-04-27 05:08:21.311430] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:51.518 [2024-04-27 05:08:21.311445] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:51.518 [2024-04-27 05:08:21.311476] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:51.518 [2024-04-27 05:08:21.311485] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:51.518 [2024-04-27 05:08:21.311505] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:51.518 05:08:21 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:25:51.518 05:08:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:51.518 05:08:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:51.518 05:08:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:51.518 05:08:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:51.518 05:08:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:51.518 05:08:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:51.518 05:08:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:51.518 05:08:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:51.518 05:08:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:51.518 05:08:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:51.518 05:08:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:51.518 05:08:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.519 05:08:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:51.777 05:08:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:51.777 "name": "Existed_Raid", 00:25:51.777 
"uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.777 "strip_size_kb": 64, 00:25:51.777 "state": "configuring", 00:25:51.777 "raid_level": "raid5f", 00:25:51.777 "superblock": false, 00:25:51.777 "num_base_bdevs": 4, 00:25:51.777 "num_base_bdevs_discovered": 1, 00:25:51.777 "num_base_bdevs_operational": 4, 00:25:51.777 "base_bdevs_list": [ 00:25:51.777 { 00:25:51.777 "name": "BaseBdev1", 00:25:51.777 "uuid": "0defa468-7a5f-4267-9ef0-109cb602419e", 00:25:51.777 "is_configured": true, 00:25:51.777 "data_offset": 0, 00:25:51.777 "data_size": 65536 00:25:51.777 }, 00:25:51.777 { 00:25:51.777 "name": "BaseBdev2", 00:25:51.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.777 "is_configured": false, 00:25:51.777 "data_offset": 0, 00:25:51.777 "data_size": 0 00:25:51.777 }, 00:25:51.777 { 00:25:51.777 "name": "BaseBdev3", 00:25:51.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.777 "is_configured": false, 00:25:51.777 "data_offset": 0, 00:25:51.777 "data_size": 0 00:25:51.777 }, 00:25:51.777 { 00:25:51.777 "name": "BaseBdev4", 00:25:51.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.777 "is_configured": false, 00:25:51.777 "data_offset": 0, 00:25:51.777 "data_size": 0 00:25:51.777 } 00:25:51.777 ] 00:25:51.777 }' 00:25:51.777 05:08:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:51.777 05:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:52.709 05:08:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:52.709 [2024-04-27 05:08:22.526335] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:52.709 BaseBdev2 00:25:52.709 05:08:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:25:52.709 05:08:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:25:52.709 05:08:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:52.709 05:08:22 -- common/autotest_common.sh@889 -- # local i 00:25:52.709 05:08:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:52.709 05:08:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:52.709 05:08:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:52.966 05:08:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:53.223 [ 00:25:53.223 { 00:25:53.223 "name": "BaseBdev2", 00:25:53.223 "aliases": [ 00:25:53.223 "ace8d42f-fa9b-4c50-985a-54a532685a1c" 00:25:53.223 ], 00:25:53.223 "product_name": "Malloc disk", 00:25:53.223 "block_size": 512, 00:25:53.223 "num_blocks": 65536, 00:25:53.223 "uuid": "ace8d42f-fa9b-4c50-985a-54a532685a1c", 00:25:53.223 "assigned_rate_limits": { 00:25:53.223 "rw_ios_per_sec": 0, 00:25:53.223 "rw_mbytes_per_sec": 0, 00:25:53.223 "r_mbytes_per_sec": 0, 00:25:53.223 "w_mbytes_per_sec": 0 00:25:53.223 }, 00:25:53.223 "claimed": true, 00:25:53.223 "claim_type": "exclusive_write", 00:25:53.223 "zoned": false, 00:25:53.223 "supported_io_types": { 00:25:53.223 "read": true, 00:25:53.223 "write": true, 00:25:53.223 "unmap": true, 00:25:53.223 "write_zeroes": true, 00:25:53.223 "flush": true, 00:25:53.223 "reset": true, 00:25:53.223 "compare": false, 00:25:53.223 "compare_and_write": false, 00:25:53.223 "abort": true, 00:25:53.223 "nvme_admin": false, 00:25:53.223 "nvme_io": false 00:25:53.223 }, 00:25:53.223 "memory_domains": [ 
00:25:53.223 { 00:25:53.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:53.223 "dma_device_type": 2 00:25:53.223 } 00:25:53.223 ], 00:25:53.223 "driver_specific": {} 00:25:53.223 } 00:25:53.223 ] 00:25:53.223 05:08:23 -- common/autotest_common.sh@895 -- # return 0 00:25:53.223 05:08:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:53.223 05:08:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:53.223 05:08:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:53.223 05:08:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:53.223 05:08:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:53.223 05:08:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:53.223 05:08:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:53.223 05:08:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:53.223 05:08:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:53.223 05:08:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:53.223 05:08:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:53.223 05:08:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:53.223 05:08:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.223 05:08:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:53.479 05:08:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:53.479 "name": "Existed_Raid", 00:25:53.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.479 "strip_size_kb": 64, 00:25:53.479 "state": "configuring", 00:25:53.479 "raid_level": "raid5f", 00:25:53.479 "superblock": false, 00:25:53.479 "num_base_bdevs": 4, 00:25:53.479 "num_base_bdevs_discovered": 2, 00:25:53.479 "num_base_bdevs_operational": 4, 00:25:53.479 "base_bdevs_list": [ 00:25:53.479 { 00:25:53.479 "name": "BaseBdev1", 00:25:53.479 "uuid": "0defa468-7a5f-4267-9ef0-109cb602419e", 00:25:53.479 "is_configured": true, 00:25:53.479 "data_offset": 0, 00:25:53.479 "data_size": 65536 00:25:53.479 }, 00:25:53.479 { 00:25:53.479 "name": "BaseBdev2", 00:25:53.479 "uuid": "ace8d42f-fa9b-4c50-985a-54a532685a1c", 00:25:53.479 "is_configured": true, 00:25:53.479 "data_offset": 0, 00:25:53.479 "data_size": 65536 00:25:53.479 }, 00:25:53.479 { 00:25:53.479 "name": "BaseBdev3", 00:25:53.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.479 "is_configured": false, 00:25:53.479 "data_offset": 0, 00:25:53.479 "data_size": 0 00:25:53.479 }, 00:25:53.479 { 00:25:53.479 "name": "BaseBdev4", 00:25:53.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.479 "is_configured": false, 00:25:53.479 "data_offset": 0, 00:25:53.479 "data_size": 0 00:25:53.479 } 00:25:53.479 ] 00:25:53.479 }' 00:25:53.479 05:08:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:53.479 05:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:54.411 05:08:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:54.411 [2024-04-27 05:08:24.227337] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:54.411 BaseBdev3 00:25:54.411 05:08:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:25:54.411 05:08:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:25:54.411 05:08:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:54.411 
05:08:24 -- common/autotest_common.sh@889 -- # local i 00:25:54.411 05:08:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:54.411 05:08:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:54.411 05:08:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:54.669 05:08:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:54.926 [ 00:25:54.926 { 00:25:54.926 "name": "BaseBdev3", 00:25:54.926 "aliases": [ 00:25:54.926 "209ce542-a6d4-40d2-a44f-f338e90d46e5" 00:25:54.926 ], 00:25:54.926 "product_name": "Malloc disk", 00:25:54.926 "block_size": 512, 00:25:54.926 "num_blocks": 65536, 00:25:54.926 "uuid": "209ce542-a6d4-40d2-a44f-f338e90d46e5", 00:25:54.926 "assigned_rate_limits": { 00:25:54.926 "rw_ios_per_sec": 0, 00:25:54.926 "rw_mbytes_per_sec": 0, 00:25:54.926 "r_mbytes_per_sec": 0, 00:25:54.926 "w_mbytes_per_sec": 0 00:25:54.926 }, 00:25:54.926 "claimed": true, 00:25:54.926 "claim_type": "exclusive_write", 00:25:54.926 "zoned": false, 00:25:54.926 "supported_io_types": { 00:25:54.926 "read": true, 00:25:54.926 "write": true, 00:25:54.926 "unmap": true, 00:25:54.926 "write_zeroes": true, 00:25:54.926 "flush": true, 00:25:54.926 "reset": true, 00:25:54.926 "compare": false, 00:25:54.926 "compare_and_write": false, 00:25:54.926 "abort": true, 00:25:54.926 "nvme_admin": false, 00:25:54.926 "nvme_io": false 00:25:54.926 }, 00:25:54.926 "memory_domains": [ 00:25:54.926 { 00:25:54.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.926 "dma_device_type": 2 00:25:54.926 } 00:25:54.926 ], 00:25:54.926 "driver_specific": {} 00:25:54.926 } 00:25:54.926 ] 00:25:54.926 05:08:24 -- common/autotest_common.sh@895 -- # return 0 00:25:54.926 05:08:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:54.926 05:08:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:54.926 05:08:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:54.926 05:08:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:54.926 05:08:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:54.926 05:08:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:54.926 05:08:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:54.926 05:08:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:54.926 05:08:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:54.926 05:08:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:54.926 05:08:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:54.926 05:08:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:54.926 05:08:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.926 05:08:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:55.186 05:08:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:55.186 "name": "Existed_Raid", 00:25:55.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.186 "strip_size_kb": 64, 00:25:55.186 "state": "configuring", 00:25:55.186 "raid_level": "raid5f", 00:25:55.186 "superblock": false, 00:25:55.186 "num_base_bdevs": 4, 00:25:55.186 "num_base_bdevs_discovered": 3, 00:25:55.186 "num_base_bdevs_operational": 4, 00:25:55.186 "base_bdevs_list": [ 00:25:55.186 { 00:25:55.186 "name": 
"BaseBdev1", 00:25:55.186 "uuid": "0defa468-7a5f-4267-9ef0-109cb602419e", 00:25:55.186 "is_configured": true, 00:25:55.186 "data_offset": 0, 00:25:55.186 "data_size": 65536 00:25:55.186 }, 00:25:55.186 { 00:25:55.186 "name": "BaseBdev2", 00:25:55.186 "uuid": "ace8d42f-fa9b-4c50-985a-54a532685a1c", 00:25:55.186 "is_configured": true, 00:25:55.186 "data_offset": 0, 00:25:55.186 "data_size": 65536 00:25:55.186 }, 00:25:55.186 { 00:25:55.186 "name": "BaseBdev3", 00:25:55.186 "uuid": "209ce542-a6d4-40d2-a44f-f338e90d46e5", 00:25:55.186 "is_configured": true, 00:25:55.186 "data_offset": 0, 00:25:55.186 "data_size": 65536 00:25:55.186 }, 00:25:55.186 { 00:25:55.186 "name": "BaseBdev4", 00:25:55.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.186 "is_configured": false, 00:25:55.186 "data_offset": 0, 00:25:55.186 "data_size": 0 00:25:55.186 } 00:25:55.186 ] 00:25:55.186 }' 00:25:55.186 05:08:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:55.186 05:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:56.122 05:08:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:56.122 [2024-04-27 05:08:25.956652] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:56.122 [2024-04-27 05:08:25.956746] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:25:56.122 [2024-04-27 05:08:25.956760] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:56.122 [2024-04-27 05:08:25.956973] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:25:56.122 [2024-04-27 05:08:25.957912] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:25:56.122 [2024-04-27 05:08:25.957942] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:25:56.122 [2024-04-27 05:08:25.958235] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:56.122 BaseBdev4 00:25:56.122 05:08:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:25:56.122 05:08:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:25:56.122 05:08:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:56.122 05:08:25 -- common/autotest_common.sh@889 -- # local i 00:25:56.122 05:08:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:56.122 05:08:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:56.122 05:08:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:56.379 05:08:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:56.638 [ 00:25:56.638 { 00:25:56.638 "name": "BaseBdev4", 00:25:56.638 "aliases": [ 00:25:56.638 "97549ce4-b43a-44f5-8f4e-35fdd9e6c05a" 00:25:56.638 ], 00:25:56.638 "product_name": "Malloc disk", 00:25:56.638 "block_size": 512, 00:25:56.638 "num_blocks": 65536, 00:25:56.638 "uuid": "97549ce4-b43a-44f5-8f4e-35fdd9e6c05a", 00:25:56.638 "assigned_rate_limits": { 00:25:56.638 "rw_ios_per_sec": 0, 00:25:56.638 "rw_mbytes_per_sec": 0, 00:25:56.638 "r_mbytes_per_sec": 0, 00:25:56.638 "w_mbytes_per_sec": 0 00:25:56.638 }, 00:25:56.638 "claimed": true, 00:25:56.638 "claim_type": "exclusive_write", 00:25:56.638 "zoned": false, 00:25:56.638 
"supported_io_types": { 00:25:56.638 "read": true, 00:25:56.638 "write": true, 00:25:56.638 "unmap": true, 00:25:56.638 "write_zeroes": true, 00:25:56.638 "flush": true, 00:25:56.638 "reset": true, 00:25:56.638 "compare": false, 00:25:56.638 "compare_and_write": false, 00:25:56.638 "abort": true, 00:25:56.638 "nvme_admin": false, 00:25:56.638 "nvme_io": false 00:25:56.638 }, 00:25:56.638 "memory_domains": [ 00:25:56.638 { 00:25:56.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:56.638 "dma_device_type": 2 00:25:56.638 } 00:25:56.638 ], 00:25:56.638 "driver_specific": {} 00:25:56.638 } 00:25:56.638 ] 00:25:56.638 05:08:26 -- common/autotest_common.sh@895 -- # return 0 00:25:56.638 05:08:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:56.638 05:08:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:56.638 05:08:26 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:25:56.638 05:08:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:56.638 05:08:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:56.638 05:08:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:56.638 05:08:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:56.638 05:08:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:56.638 05:08:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:56.638 05:08:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:56.638 05:08:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:56.638 05:08:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:56.638 05:08:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.638 05:08:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:56.896 05:08:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:56.896 "name": "Existed_Raid", 00:25:56.896 "uuid": "bd785e12-6512-4af5-85cb-afcda9ff359c", 00:25:56.896 "strip_size_kb": 64, 00:25:56.896 "state": "online", 00:25:56.896 "raid_level": "raid5f", 00:25:56.896 "superblock": false, 00:25:56.896 "num_base_bdevs": 4, 00:25:56.896 "num_base_bdevs_discovered": 4, 00:25:56.896 "num_base_bdevs_operational": 4, 00:25:56.896 "base_bdevs_list": [ 00:25:56.897 { 00:25:56.897 "name": "BaseBdev1", 00:25:56.897 "uuid": "0defa468-7a5f-4267-9ef0-109cb602419e", 00:25:56.897 "is_configured": true, 00:25:56.897 "data_offset": 0, 00:25:56.897 "data_size": 65536 00:25:56.897 }, 00:25:56.897 { 00:25:56.897 "name": "BaseBdev2", 00:25:56.897 "uuid": "ace8d42f-fa9b-4c50-985a-54a532685a1c", 00:25:56.897 "is_configured": true, 00:25:56.897 "data_offset": 0, 00:25:56.897 "data_size": 65536 00:25:56.897 }, 00:25:56.897 { 00:25:56.897 "name": "BaseBdev3", 00:25:56.897 "uuid": "209ce542-a6d4-40d2-a44f-f338e90d46e5", 00:25:56.897 "is_configured": true, 00:25:56.897 "data_offset": 0, 00:25:56.897 "data_size": 65536 00:25:56.897 }, 00:25:56.897 { 00:25:56.897 "name": "BaseBdev4", 00:25:56.897 "uuid": "97549ce4-b43a-44f5-8f4e-35fdd9e6c05a", 00:25:56.897 "is_configured": true, 00:25:56.897 "data_offset": 0, 00:25:56.897 "data_size": 65536 00:25:56.897 } 00:25:56.897 ] 00:25:56.897 }' 00:25:56.897 05:08:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:56.897 05:08:26 -- common/autotest_common.sh@10 -- # set +x 00:25:57.831 05:08:27 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:25:57.831 [2024-04-27 05:08:27.682162] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:57.831 05:08:27 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:25:57.831 05:08:27 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:25:57.831 05:08:27 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:57.831 05:08:27 -- bdev/bdev_raid.sh@196 -- # return 0 00:25:57.831 05:08:27 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:25:57.831 05:08:27 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:57.831 05:08:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:57.831 05:08:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:57.831 05:08:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:57.831 05:08:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:57.831 05:08:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:57.831 05:08:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:57.831 05:08:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:57.831 05:08:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:57.831 05:08:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:57.831 05:08:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.831 05:08:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:58.090 05:08:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:58.090 "name": "Existed_Raid", 00:25:58.090 "uuid": "bd785e12-6512-4af5-85cb-afcda9ff359c", 00:25:58.090 "strip_size_kb": 64, 00:25:58.090 "state": "online", 00:25:58.090 "raid_level": "raid5f", 00:25:58.090 "superblock": false, 00:25:58.090 "num_base_bdevs": 4, 00:25:58.090 "num_base_bdevs_discovered": 3, 00:25:58.090 "num_base_bdevs_operational": 3, 00:25:58.090 "base_bdevs_list": [ 00:25:58.090 { 00:25:58.090 "name": null, 00:25:58.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.090 "is_configured": false, 00:25:58.090 "data_offset": 0, 00:25:58.090 "data_size": 65536 00:25:58.090 }, 00:25:58.090 { 00:25:58.090 "name": "BaseBdev2", 00:25:58.090 "uuid": "ace8d42f-fa9b-4c50-985a-54a532685a1c", 00:25:58.090 "is_configured": true, 00:25:58.090 "data_offset": 0, 00:25:58.090 "data_size": 65536 00:25:58.090 }, 00:25:58.090 { 00:25:58.090 "name": "BaseBdev3", 00:25:58.090 "uuid": "209ce542-a6d4-40d2-a44f-f338e90d46e5", 00:25:58.090 "is_configured": true, 00:25:58.090 "data_offset": 0, 00:25:58.090 "data_size": 65536 00:25:58.090 }, 00:25:58.090 { 00:25:58.090 "name": "BaseBdev4", 00:25:58.090 "uuid": "97549ce4-b43a-44f5-8f4e-35fdd9e6c05a", 00:25:58.090 "is_configured": true, 00:25:58.090 "data_offset": 0, 00:25:58.090 "data_size": 65536 00:25:58.090 } 00:25:58.090 ] 00:25:58.090 }' 00:25:58.090 05:08:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:58.090 05:08:27 -- common/autotest_common.sh@10 -- # set +x 00:25:59.024 05:08:28 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:25:59.024 05:08:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:59.024 05:08:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.024 05:08:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:59.024 05:08:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:59.024 05:08:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
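What was just verified is the redundancy property: raid5f keeps the array online after a single base bdev is deleted, with the missing slot reported as a null name and three members still operational. Replayed by hand against the same socket (names and paths as in this run), the check amounts to:

    # shorthand for readability; the trace invokes rpc.py with its full path each time
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_malloc_delete BaseBdev1
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")
        | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
    # expected for raid5f with one member removed: "online 3/3"

Deleting a second member (BaseBdev2, below) exceeds what raid5f can tolerate, and the trace shows the array transitioning from online to offline at that point.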
00:25:59.024 05:08:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:59.282 [2024-04-27 05:08:29.127194] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:59.282 [2024-04-27 05:08:29.127253] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:59.282 [2024-04-27 05:08:29.127356] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:59.282 05:08:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:59.282 05:08:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:59.282 05:08:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.282 05:08:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:59.540 05:08:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:59.540 05:08:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:59.540 05:08:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:59.797 [2024-04-27 05:08:29.627307] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:59.797 05:08:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:59.797 05:08:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:59.797 05:08:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:59.797 05:08:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.056 05:08:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:00.056 05:08:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:00.056 05:08:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:26:00.313 [2024-04-27 05:08:30.185697] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:00.314 [2024-04-27 05:08:30.185786] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:26:00.314 05:08:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:00.314 05:08:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:00.572 05:08:30 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.572 05:08:30 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:26:00.572 05:08:30 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:26:00.572 05:08:30 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:26:00.572 05:08:30 -- bdev/bdev_raid.sh@287 -- # killprocess 141880 00:26:00.572 05:08:30 -- common/autotest_common.sh@926 -- # '[' -z 141880 ']' 00:26:00.572 05:08:30 -- common/autotest_common.sh@930 -- # kill -0 141880 00:26:00.572 05:08:30 -- common/autotest_common.sh@931 -- # uname 00:26:00.572 05:08:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:00.572 05:08:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141880 00:26:00.832 killing process with pid 141880 00:26:00.832 05:08:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:00.832 05:08:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:00.832 05:08:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141880' 00:26:00.832 05:08:30 -- 
common/autotest_common.sh@945 -- # kill 141880 00:26:00.832 05:08:30 -- common/autotest_common.sh@950 -- # wait 141880 00:26:00.832 [2024-04-27 05:08:30.506629] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:00.832 [2024-04-27 05:08:30.506745] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:01.090 ************************************ 00:26:01.090 END TEST raid5f_state_function_test 00:26:01.090 ************************************ 00:26:01.090 05:08:30 -- bdev/bdev_raid.sh@289 -- # return 0 00:26:01.090 00:26:01.090 real 0m14.415s 00:26:01.090 user 0m26.450s 00:26:01.090 sys 0m1.921s 00:26:01.090 05:08:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:01.090 05:08:30 -- common/autotest_common.sh@10 -- # set +x 00:26:01.090 05:08:30 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:26:01.090 05:08:30 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:26:01.090 05:08:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:01.090 05:08:30 -- common/autotest_common.sh@10 -- # set +x 00:26:01.090 ************************************ 00:26:01.090 START TEST raid5f_state_function_test_sb 00:26:01.090 ************************************ 00:26:01.090 05:08:30 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 true 00:26:01.090 05:08:30 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:26:01.091 05:08:30 -- 
bdev/bdev_raid.sh@226 -- # raid_pid=142313 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 142313' 00:26:01.091 Process raid pid: 142313 00:26:01.091 05:08:30 -- bdev/bdev_raid.sh@228 -- # waitforlisten 142313 /var/tmp/spdk-raid.sock 00:26:01.091 05:08:30 -- common/autotest_common.sh@819 -- # '[' -z 142313 ']' 00:26:01.091 05:08:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:01.091 05:08:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:01.091 05:08:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:01.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:01.091 05:08:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:01.091 05:08:30 -- common/autotest_common.sh@10 -- # set +x 00:26:01.091 [2024-04-27 05:08:30.979461] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:01.091 [2024-04-27 05:08:30.979714] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.349 [2024-04-27 05:08:31.145185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.349 [2024-04-27 05:08:31.264324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.608 [2024-04-27 05:08:31.340442] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:02.175 05:08:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:02.175 05:08:31 -- common/autotest_common.sh@852 -- # return 0 00:26:02.175 05:08:31 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:02.434 [2024-04-27 05:08:32.186045] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:02.434 [2024-04-27 05:08:32.186176] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:02.434 [2024-04-27 05:08:32.186192] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:02.434 [2024-04-27 05:08:32.186219] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:02.434 [2024-04-27 05:08:32.186228] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:02.434 [2024-04-27 05:08:32.186274] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:02.434 [2024-04-27 05:08:32.186284] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:02.434 [2024-04-27 05:08:32.186310] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:02.434 05:08:32 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:02.434 05:08:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:02.434 05:08:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:02.434 05:08:32 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid5f 00:26:02.434 05:08:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:02.434 05:08:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:02.434 05:08:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:02.434 05:08:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:02.434 05:08:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:02.434 05:08:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:02.434 05:08:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.434 05:08:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:02.693 05:08:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:02.693 "name": "Existed_Raid", 00:26:02.693 "uuid": "5a17d259-036e-4e29-8196-21113db278f6", 00:26:02.693 "strip_size_kb": 64, 00:26:02.693 "state": "configuring", 00:26:02.693 "raid_level": "raid5f", 00:26:02.693 "superblock": true, 00:26:02.693 "num_base_bdevs": 4, 00:26:02.693 "num_base_bdevs_discovered": 0, 00:26:02.693 "num_base_bdevs_operational": 4, 00:26:02.693 "base_bdevs_list": [ 00:26:02.693 { 00:26:02.693 "name": "BaseBdev1", 00:26:02.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.693 "is_configured": false, 00:26:02.693 "data_offset": 0, 00:26:02.693 "data_size": 0 00:26:02.693 }, 00:26:02.693 { 00:26:02.693 "name": "BaseBdev2", 00:26:02.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.693 "is_configured": false, 00:26:02.693 "data_offset": 0, 00:26:02.693 "data_size": 0 00:26:02.693 }, 00:26:02.693 { 00:26:02.693 "name": "BaseBdev3", 00:26:02.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.693 "is_configured": false, 00:26:02.693 "data_offset": 0, 00:26:02.693 "data_size": 0 00:26:02.693 }, 00:26:02.693 { 00:26:02.693 "name": "BaseBdev4", 00:26:02.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.693 "is_configured": false, 00:26:02.693 "data_offset": 0, 00:26:02.693 "data_size": 0 00:26:02.693 } 00:26:02.693 ] 00:26:02.693 }' 00:26:02.693 05:08:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:02.693 05:08:32 -- common/autotest_common.sh@10 -- # set +x 00:26:03.260 05:08:33 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:03.518 [2024-04-27 05:08:33.334099] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:03.518 [2024-04-27 05:08:33.334165] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:26:03.518 05:08:33 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:03.776 [2024-04-27 05:08:33.602227] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:03.776 [2024-04-27 05:08:33.602328] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:03.776 [2024-04-27 05:08:33.602344] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:03.776 [2024-04-27 05:08:33.602376] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:03.776 [2024-04-27 05:08:33.602385] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:03.776 
[2024-04-27 05:08:33.602432] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:03.776 [2024-04-27 05:08:33.602442] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:03.776 [2024-04-27 05:08:33.602469] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:03.776 05:08:33 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:04.035 [2024-04-27 05:08:33.861374] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:04.035 BaseBdev1 00:26:04.035 05:08:33 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:26:04.035 05:08:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:26:04.035 05:08:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:26:04.035 05:08:33 -- common/autotest_common.sh@889 -- # local i 00:26:04.035 05:08:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:26:04.035 05:08:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:26:04.035 05:08:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:04.293 05:08:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:04.552 [ 00:26:04.552 { 00:26:04.552 "name": "BaseBdev1", 00:26:04.552 "aliases": [ 00:26:04.552 "0a3a9e4e-fc19-4412-8ab1-a4989c76706d" 00:26:04.552 ], 00:26:04.552 "product_name": "Malloc disk", 00:26:04.552 "block_size": 512, 00:26:04.552 "num_blocks": 65536, 00:26:04.552 "uuid": "0a3a9e4e-fc19-4412-8ab1-a4989c76706d", 00:26:04.552 "assigned_rate_limits": { 00:26:04.552 "rw_ios_per_sec": 0, 00:26:04.552 "rw_mbytes_per_sec": 0, 00:26:04.552 "r_mbytes_per_sec": 0, 00:26:04.552 "w_mbytes_per_sec": 0 00:26:04.552 }, 00:26:04.552 "claimed": true, 00:26:04.552 "claim_type": "exclusive_write", 00:26:04.552 "zoned": false, 00:26:04.552 "supported_io_types": { 00:26:04.552 "read": true, 00:26:04.552 "write": true, 00:26:04.552 "unmap": true, 00:26:04.552 "write_zeroes": true, 00:26:04.552 "flush": true, 00:26:04.552 "reset": true, 00:26:04.552 "compare": false, 00:26:04.552 "compare_and_write": false, 00:26:04.552 "abort": true, 00:26:04.552 "nvme_admin": false, 00:26:04.552 "nvme_io": false 00:26:04.552 }, 00:26:04.552 "memory_domains": [ 00:26:04.552 { 00:26:04.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.552 "dma_device_type": 2 00:26:04.552 } 00:26:04.552 ], 00:26:04.552 "driver_specific": {} 00:26:04.552 } 00:26:04.552 ] 00:26:04.552 05:08:34 -- common/autotest_common.sh@895 -- # return 0 00:26:04.552 05:08:34 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:04.552 05:08:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:04.552 05:08:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:04.552 05:08:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:04.552 05:08:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:04.552 05:08:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:04.552 05:08:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:04.552 05:08:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:04.552 05:08:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:04.552 
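For reference, the RPC sequence this part of the trace is exercising can be sketched as below; the calls and the jq filter are copied from the trace, the $RPC shorthand is added here only for readability, and it is assumed the bdev_svc target started above is still listening on /var/tmp/spdk-raid.sock:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Register the raid5f array with a superblock (-s) before any of its members exist;
  # it stays in the "configuring" state until all four base bdevs are discovered.
  $RPC bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # Inspect the half-built array; state and num_base_bdevs_discovered match the JSON dumps in the trace.
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'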
05:08:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:04.552 05:08:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.552 05:08:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:04.811 05:08:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:04.811 "name": "Existed_Raid", 00:26:04.811 "uuid": "1fb71b1e-1ecb-4d53-8f71-2d8de121f98d", 00:26:04.811 "strip_size_kb": 64, 00:26:04.811 "state": "configuring", 00:26:04.811 "raid_level": "raid5f", 00:26:04.811 "superblock": true, 00:26:04.811 "num_base_bdevs": 4, 00:26:04.811 "num_base_bdevs_discovered": 1, 00:26:04.811 "num_base_bdevs_operational": 4, 00:26:04.811 "base_bdevs_list": [ 00:26:04.811 { 00:26:04.811 "name": "BaseBdev1", 00:26:04.811 "uuid": "0a3a9e4e-fc19-4412-8ab1-a4989c76706d", 00:26:04.811 "is_configured": true, 00:26:04.811 "data_offset": 2048, 00:26:04.811 "data_size": 63488 00:26:04.811 }, 00:26:04.811 { 00:26:04.811 "name": "BaseBdev2", 00:26:04.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.811 "is_configured": false, 00:26:04.811 "data_offset": 0, 00:26:04.811 "data_size": 0 00:26:04.811 }, 00:26:04.811 { 00:26:04.811 "name": "BaseBdev3", 00:26:04.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.811 "is_configured": false, 00:26:04.811 "data_offset": 0, 00:26:04.811 "data_size": 0 00:26:04.811 }, 00:26:04.811 { 00:26:04.811 "name": "BaseBdev4", 00:26:04.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.811 "is_configured": false, 00:26:04.811 "data_offset": 0, 00:26:04.811 "data_size": 0 00:26:04.811 } 00:26:04.811 ] 00:26:04.811 }' 00:26:04.811 05:08:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:04.811 05:08:34 -- common/autotest_common.sh@10 -- # set +x 00:26:05.749 05:08:35 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:05.749 [2024-04-27 05:08:35.557932] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:05.749 [2024-04-27 05:08:35.558038] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:26:05.749 05:08:35 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:26:05.749 05:08:35 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:06.008 05:08:35 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:06.266 BaseBdev1 00:26:06.266 05:08:36 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:26:06.266 05:08:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:26:06.266 05:08:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:26:06.266 05:08:36 -- common/autotest_common.sh@889 -- # local i 00:26:06.266 05:08:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:26:06.266 05:08:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:26:06.266 05:08:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:06.524 05:08:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:06.782 [ 00:26:06.782 { 00:26:06.782 "name": "BaseBdev1", 00:26:06.782 "aliases": [ 00:26:06.782 
"16d7cba6-bb2a-4587-bc83-19ef858b9c6c" 00:26:06.782 ], 00:26:06.782 "product_name": "Malloc disk", 00:26:06.782 "block_size": 512, 00:26:06.782 "num_blocks": 65536, 00:26:06.782 "uuid": "16d7cba6-bb2a-4587-bc83-19ef858b9c6c", 00:26:06.782 "assigned_rate_limits": { 00:26:06.782 "rw_ios_per_sec": 0, 00:26:06.782 "rw_mbytes_per_sec": 0, 00:26:06.782 "r_mbytes_per_sec": 0, 00:26:06.782 "w_mbytes_per_sec": 0 00:26:06.782 }, 00:26:06.782 "claimed": false, 00:26:06.782 "zoned": false, 00:26:06.782 "supported_io_types": { 00:26:06.782 "read": true, 00:26:06.782 "write": true, 00:26:06.782 "unmap": true, 00:26:06.782 "write_zeroes": true, 00:26:06.782 "flush": true, 00:26:06.782 "reset": true, 00:26:06.782 "compare": false, 00:26:06.782 "compare_and_write": false, 00:26:06.782 "abort": true, 00:26:06.782 "nvme_admin": false, 00:26:06.782 "nvme_io": false 00:26:06.782 }, 00:26:06.782 "memory_domains": [ 00:26:06.782 { 00:26:06.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:06.782 "dma_device_type": 2 00:26:06.782 } 00:26:06.782 ], 00:26:06.782 "driver_specific": {} 00:26:06.782 } 00:26:06.782 ] 00:26:06.782 05:08:36 -- common/autotest_common.sh@895 -- # return 0 00:26:06.782 05:08:36 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:07.078 [2024-04-27 05:08:36.853665] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:07.078 [2024-04-27 05:08:36.856140] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:07.078 [2024-04-27 05:08:36.856239] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:07.078 [2024-04-27 05:08:36.856255] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:07.078 [2024-04-27 05:08:36.856285] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:07.078 [2024-04-27 05:08:36.856295] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:07.078 [2024-04-27 05:08:36.856315] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:07.078 05:08:36 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:26:07.078 05:08:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:07.078 05:08:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:07.078 05:08:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:07.078 05:08:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:07.078 05:08:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:07.078 05:08:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:07.078 05:08:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:07.078 05:08:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:07.079 05:08:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:07.079 05:08:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:07.079 05:08:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:07.079 05:08:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.079 05:08:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:07.341 05:08:37 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:26:07.341 "name": "Existed_Raid", 00:26:07.341 "uuid": "cb109489-c042-42d7-afa4-af6d19a236fa", 00:26:07.341 "strip_size_kb": 64, 00:26:07.341 "state": "configuring", 00:26:07.341 "raid_level": "raid5f", 00:26:07.341 "superblock": true, 00:26:07.341 "num_base_bdevs": 4, 00:26:07.341 "num_base_bdevs_discovered": 1, 00:26:07.341 "num_base_bdevs_operational": 4, 00:26:07.341 "base_bdevs_list": [ 00:26:07.341 { 00:26:07.341 "name": "BaseBdev1", 00:26:07.341 "uuid": "16d7cba6-bb2a-4587-bc83-19ef858b9c6c", 00:26:07.341 "is_configured": true, 00:26:07.341 "data_offset": 2048, 00:26:07.341 "data_size": 63488 00:26:07.341 }, 00:26:07.341 { 00:26:07.341 "name": "BaseBdev2", 00:26:07.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.342 "is_configured": false, 00:26:07.342 "data_offset": 0, 00:26:07.342 "data_size": 0 00:26:07.342 }, 00:26:07.342 { 00:26:07.342 "name": "BaseBdev3", 00:26:07.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.342 "is_configured": false, 00:26:07.342 "data_offset": 0, 00:26:07.342 "data_size": 0 00:26:07.342 }, 00:26:07.342 { 00:26:07.342 "name": "BaseBdev4", 00:26:07.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.342 "is_configured": false, 00:26:07.342 "data_offset": 0, 00:26:07.342 "data_size": 0 00:26:07.342 } 00:26:07.342 ] 00:26:07.342 }' 00:26:07.342 05:08:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:07.342 05:08:37 -- common/autotest_common.sh@10 -- # set +x 00:26:07.909 05:08:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:08.168 [2024-04-27 05:08:38.029986] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:08.168 BaseBdev2 00:26:08.168 05:08:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:26:08.168 05:08:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:26:08.168 05:08:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:26:08.168 05:08:38 -- common/autotest_common.sh@889 -- # local i 00:26:08.168 05:08:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:26:08.168 05:08:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:26:08.168 05:08:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:08.426 05:08:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:08.685 [ 00:26:08.685 { 00:26:08.685 "name": "BaseBdev2", 00:26:08.685 "aliases": [ 00:26:08.685 "189c81fa-6296-43b4-b528-fb76624738f0" 00:26:08.685 ], 00:26:08.685 "product_name": "Malloc disk", 00:26:08.685 "block_size": 512, 00:26:08.685 "num_blocks": 65536, 00:26:08.685 "uuid": "189c81fa-6296-43b4-b528-fb76624738f0", 00:26:08.685 "assigned_rate_limits": { 00:26:08.685 "rw_ios_per_sec": 0, 00:26:08.685 "rw_mbytes_per_sec": 0, 00:26:08.685 "r_mbytes_per_sec": 0, 00:26:08.685 "w_mbytes_per_sec": 0 00:26:08.685 }, 00:26:08.685 "claimed": true, 00:26:08.685 "claim_type": "exclusive_write", 00:26:08.685 "zoned": false, 00:26:08.685 "supported_io_types": { 00:26:08.685 "read": true, 00:26:08.685 "write": true, 00:26:08.685 "unmap": true, 00:26:08.685 "write_zeroes": true, 00:26:08.685 "flush": true, 00:26:08.685 "reset": true, 00:26:08.685 "compare": false, 00:26:08.685 "compare_and_write": false, 00:26:08.685 "abort": true, 00:26:08.685 "nvme_admin": false, 00:26:08.685 
"nvme_io": false 00:26:08.685 }, 00:26:08.685 "memory_domains": [ 00:26:08.685 { 00:26:08.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.685 "dma_device_type": 2 00:26:08.685 } 00:26:08.685 ], 00:26:08.685 "driver_specific": {} 00:26:08.685 } 00:26:08.685 ] 00:26:08.685 05:08:38 -- common/autotest_common.sh@895 -- # return 0 00:26:08.685 05:08:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:08.685 05:08:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:08.685 05:08:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:08.685 05:08:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:08.685 05:08:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:08.685 05:08:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:08.685 05:08:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:08.685 05:08:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:08.685 05:08:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:08.685 05:08:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:08.685 05:08:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:08.685 05:08:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:08.685 05:08:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.685 05:08:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:08.943 05:08:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:08.943 "name": "Existed_Raid", 00:26:08.943 "uuid": "cb109489-c042-42d7-afa4-af6d19a236fa", 00:26:08.943 "strip_size_kb": 64, 00:26:08.943 "state": "configuring", 00:26:08.943 "raid_level": "raid5f", 00:26:08.943 "superblock": true, 00:26:08.943 "num_base_bdevs": 4, 00:26:08.943 "num_base_bdevs_discovered": 2, 00:26:08.943 "num_base_bdevs_operational": 4, 00:26:08.943 "base_bdevs_list": [ 00:26:08.943 { 00:26:08.943 "name": "BaseBdev1", 00:26:08.943 "uuid": "16d7cba6-bb2a-4587-bc83-19ef858b9c6c", 00:26:08.943 "is_configured": true, 00:26:08.943 "data_offset": 2048, 00:26:08.943 "data_size": 63488 00:26:08.943 }, 00:26:08.943 { 00:26:08.943 "name": "BaseBdev2", 00:26:08.943 "uuid": "189c81fa-6296-43b4-b528-fb76624738f0", 00:26:08.943 "is_configured": true, 00:26:08.943 "data_offset": 2048, 00:26:08.943 "data_size": 63488 00:26:08.943 }, 00:26:08.943 { 00:26:08.943 "name": "BaseBdev3", 00:26:08.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.943 "is_configured": false, 00:26:08.943 "data_offset": 0, 00:26:08.943 "data_size": 0 00:26:08.943 }, 00:26:08.943 { 00:26:08.943 "name": "BaseBdev4", 00:26:08.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.943 "is_configured": false, 00:26:08.943 "data_offset": 0, 00:26:08.943 "data_size": 0 00:26:08.943 } 00:26:08.943 ] 00:26:08.943 }' 00:26:08.943 05:08:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:08.943 05:08:38 -- common/autotest_common.sh@10 -- # set +x 00:26:09.877 05:08:39 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:09.877 [2024-04-27 05:08:39.727183] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:09.877 BaseBdev3 00:26:09.877 05:08:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:26:09.877 05:08:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:26:09.877 05:08:39 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:26:09.877 05:08:39 -- common/autotest_common.sh@889 -- # local i 00:26:09.877 05:08:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:26:09.877 05:08:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:26:09.877 05:08:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:10.136 05:08:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:10.395 [ 00:26:10.395 { 00:26:10.395 "name": "BaseBdev3", 00:26:10.395 "aliases": [ 00:26:10.395 "9ee1137b-2a05-4832-aa41-aa31b9e18263" 00:26:10.395 ], 00:26:10.395 "product_name": "Malloc disk", 00:26:10.395 "block_size": 512, 00:26:10.395 "num_blocks": 65536, 00:26:10.395 "uuid": "9ee1137b-2a05-4832-aa41-aa31b9e18263", 00:26:10.395 "assigned_rate_limits": { 00:26:10.395 "rw_ios_per_sec": 0, 00:26:10.395 "rw_mbytes_per_sec": 0, 00:26:10.395 "r_mbytes_per_sec": 0, 00:26:10.395 "w_mbytes_per_sec": 0 00:26:10.395 }, 00:26:10.395 "claimed": true, 00:26:10.395 "claim_type": "exclusive_write", 00:26:10.395 "zoned": false, 00:26:10.395 "supported_io_types": { 00:26:10.395 "read": true, 00:26:10.395 "write": true, 00:26:10.395 "unmap": true, 00:26:10.395 "write_zeroes": true, 00:26:10.395 "flush": true, 00:26:10.395 "reset": true, 00:26:10.395 "compare": false, 00:26:10.395 "compare_and_write": false, 00:26:10.395 "abort": true, 00:26:10.395 "nvme_admin": false, 00:26:10.395 "nvme_io": false 00:26:10.395 }, 00:26:10.395 "memory_domains": [ 00:26:10.395 { 00:26:10.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.395 "dma_device_type": 2 00:26:10.396 } 00:26:10.396 ], 00:26:10.396 "driver_specific": {} 00:26:10.396 } 00:26:10.396 ] 00:26:10.396 05:08:40 -- common/autotest_common.sh@895 -- # return 0 00:26:10.396 05:08:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:10.396 05:08:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:10.396 05:08:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:10.655 05:08:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:10.655 05:08:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:10.655 05:08:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:10.655 05:08:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:10.655 05:08:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:10.655 05:08:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:10.655 05:08:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:10.655 05:08:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:10.655 05:08:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:10.655 05:08:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.655 05:08:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:10.655 05:08:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:10.655 "name": "Existed_Raid", 00:26:10.655 "uuid": "cb109489-c042-42d7-afa4-af6d19a236fa", 00:26:10.655 "strip_size_kb": 64, 00:26:10.655 "state": "configuring", 00:26:10.655 "raid_level": "raid5f", 00:26:10.655 "superblock": true, 00:26:10.655 "num_base_bdevs": 4, 00:26:10.655 "num_base_bdevs_discovered": 3, 00:26:10.655 "num_base_bdevs_operational": 4, 
00:26:10.655 "base_bdevs_list": [ 00:26:10.655 { 00:26:10.655 "name": "BaseBdev1", 00:26:10.655 "uuid": "16d7cba6-bb2a-4587-bc83-19ef858b9c6c", 00:26:10.655 "is_configured": true, 00:26:10.655 "data_offset": 2048, 00:26:10.655 "data_size": 63488 00:26:10.655 }, 00:26:10.655 { 00:26:10.655 "name": "BaseBdev2", 00:26:10.655 "uuid": "189c81fa-6296-43b4-b528-fb76624738f0", 00:26:10.655 "is_configured": true, 00:26:10.655 "data_offset": 2048, 00:26:10.655 "data_size": 63488 00:26:10.655 }, 00:26:10.655 { 00:26:10.655 "name": "BaseBdev3", 00:26:10.655 "uuid": "9ee1137b-2a05-4832-aa41-aa31b9e18263", 00:26:10.655 "is_configured": true, 00:26:10.655 "data_offset": 2048, 00:26:10.655 "data_size": 63488 00:26:10.655 }, 00:26:10.655 { 00:26:10.655 "name": "BaseBdev4", 00:26:10.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.655 "is_configured": false, 00:26:10.655 "data_offset": 0, 00:26:10.655 "data_size": 0 00:26:10.655 } 00:26:10.655 ] 00:26:10.655 }' 00:26:10.655 05:08:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:10.655 05:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:11.592 05:08:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:11.592 [2024-04-27 05:08:41.498446] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:11.592 [2024-04-27 05:08:41.498765] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:26:11.592 [2024-04-27 05:08:41.498783] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:11.592 [2024-04-27 05:08:41.498956] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:26:11.592 [2024-04-27 05:08:41.499879] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:26:11.592 [2024-04-27 05:08:41.499907] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:26:11.592 [2024-04-27 05:08:41.500098] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:11.592 BaseBdev4 00:26:11.851 05:08:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:26:11.851 05:08:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:26:11.851 05:08:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:26:11.851 05:08:41 -- common/autotest_common.sh@889 -- # local i 00:26:11.851 05:08:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:26:11.851 05:08:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:26:11.851 05:08:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:12.110 05:08:41 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:12.370 [ 00:26:12.370 { 00:26:12.370 "name": "BaseBdev4", 00:26:12.370 "aliases": [ 00:26:12.370 "c69db14b-c8a9-4883-bebd-9c4b96a30809" 00:26:12.370 ], 00:26:12.370 "product_name": "Malloc disk", 00:26:12.370 "block_size": 512, 00:26:12.370 "num_blocks": 65536, 00:26:12.370 "uuid": "c69db14b-c8a9-4883-bebd-9c4b96a30809", 00:26:12.370 "assigned_rate_limits": { 00:26:12.370 "rw_ios_per_sec": 0, 00:26:12.370 "rw_mbytes_per_sec": 0, 00:26:12.370 "r_mbytes_per_sec": 0, 00:26:12.370 "w_mbytes_per_sec": 0 00:26:12.370 }, 00:26:12.370 "claimed": true, 00:26:12.370 "claim_type": 
"exclusive_write", 00:26:12.370 "zoned": false, 00:26:12.370 "supported_io_types": { 00:26:12.370 "read": true, 00:26:12.370 "write": true, 00:26:12.370 "unmap": true, 00:26:12.370 "write_zeroes": true, 00:26:12.370 "flush": true, 00:26:12.370 "reset": true, 00:26:12.370 "compare": false, 00:26:12.370 "compare_and_write": false, 00:26:12.370 "abort": true, 00:26:12.370 "nvme_admin": false, 00:26:12.370 "nvme_io": false 00:26:12.370 }, 00:26:12.370 "memory_domains": [ 00:26:12.370 { 00:26:12.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:12.370 "dma_device_type": 2 00:26:12.370 } 00:26:12.370 ], 00:26:12.370 "driver_specific": {} 00:26:12.370 } 00:26:12.370 ] 00:26:12.370 05:08:42 -- common/autotest_common.sh@895 -- # return 0 00:26:12.370 05:08:42 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:12.370 05:08:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:12.370 05:08:42 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:26:12.370 05:08:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:12.370 05:08:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:12.370 05:08:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:12.370 05:08:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:12.370 05:08:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:12.370 05:08:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:12.371 05:08:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:12.371 05:08:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:12.371 05:08:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:12.371 05:08:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:12.371 05:08:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.630 05:08:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:12.630 "name": "Existed_Raid", 00:26:12.630 "uuid": "cb109489-c042-42d7-afa4-af6d19a236fa", 00:26:12.630 "strip_size_kb": 64, 00:26:12.630 "state": "online", 00:26:12.630 "raid_level": "raid5f", 00:26:12.630 "superblock": true, 00:26:12.630 "num_base_bdevs": 4, 00:26:12.630 "num_base_bdevs_discovered": 4, 00:26:12.630 "num_base_bdevs_operational": 4, 00:26:12.630 "base_bdevs_list": [ 00:26:12.630 { 00:26:12.630 "name": "BaseBdev1", 00:26:12.630 "uuid": "16d7cba6-bb2a-4587-bc83-19ef858b9c6c", 00:26:12.630 "is_configured": true, 00:26:12.630 "data_offset": 2048, 00:26:12.630 "data_size": 63488 00:26:12.630 }, 00:26:12.630 { 00:26:12.630 "name": "BaseBdev2", 00:26:12.630 "uuid": "189c81fa-6296-43b4-b528-fb76624738f0", 00:26:12.630 "is_configured": true, 00:26:12.630 "data_offset": 2048, 00:26:12.630 "data_size": 63488 00:26:12.630 }, 00:26:12.630 { 00:26:12.630 "name": "BaseBdev3", 00:26:12.630 "uuid": "9ee1137b-2a05-4832-aa41-aa31b9e18263", 00:26:12.630 "is_configured": true, 00:26:12.630 "data_offset": 2048, 00:26:12.630 "data_size": 63488 00:26:12.630 }, 00:26:12.630 { 00:26:12.630 "name": "BaseBdev4", 00:26:12.630 "uuid": "c69db14b-c8a9-4883-bebd-9c4b96a30809", 00:26:12.630 "is_configured": true, 00:26:12.630 "data_offset": 2048, 00:26:12.630 "data_size": 63488 00:26:12.630 } 00:26:12.630 ] 00:26:12.630 }' 00:26:12.630 05:08:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:12.630 05:08:42 -- common/autotest_common.sh@10 -- # set +x 00:26:13.196 05:08:42 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:13.453 [2024-04-27 05:08:43.179866] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:13.453 05:08:43 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:26:13.453 05:08:43 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:26:13.453 05:08:43 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:13.453 05:08:43 -- bdev/bdev_raid.sh@196 -- # return 0 00:26:13.453 05:08:43 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:26:13.453 05:08:43 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:13.453 05:08:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:13.453 05:08:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:13.453 05:08:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:13.453 05:08:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:13.453 05:08:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:13.453 05:08:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:13.453 05:08:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:13.453 05:08:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:13.453 05:08:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:13.453 05:08:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.453 05:08:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:13.710 05:08:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:13.710 "name": "Existed_Raid", 00:26:13.710 "uuid": "cb109489-c042-42d7-afa4-af6d19a236fa", 00:26:13.710 "strip_size_kb": 64, 00:26:13.710 "state": "online", 00:26:13.710 "raid_level": "raid5f", 00:26:13.710 "superblock": true, 00:26:13.710 "num_base_bdevs": 4, 00:26:13.710 "num_base_bdevs_discovered": 3, 00:26:13.710 "num_base_bdevs_operational": 3, 00:26:13.710 "base_bdevs_list": [ 00:26:13.710 { 00:26:13.710 "name": null, 00:26:13.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.710 "is_configured": false, 00:26:13.710 "data_offset": 2048, 00:26:13.710 "data_size": 63488 00:26:13.710 }, 00:26:13.710 { 00:26:13.710 "name": "BaseBdev2", 00:26:13.710 "uuid": "189c81fa-6296-43b4-b528-fb76624738f0", 00:26:13.710 "is_configured": true, 00:26:13.710 "data_offset": 2048, 00:26:13.710 "data_size": 63488 00:26:13.710 }, 00:26:13.710 { 00:26:13.710 "name": "BaseBdev3", 00:26:13.710 "uuid": "9ee1137b-2a05-4832-aa41-aa31b9e18263", 00:26:13.710 "is_configured": true, 00:26:13.710 "data_offset": 2048, 00:26:13.710 "data_size": 63488 00:26:13.710 }, 00:26:13.710 { 00:26:13.710 "name": "BaseBdev4", 00:26:13.710 "uuid": "c69db14b-c8a9-4883-bebd-9c4b96a30809", 00:26:13.710 "is_configured": true, 00:26:13.710 "data_offset": 2048, 00:26:13.710 "data_size": 63488 00:26:13.710 } 00:26:13.710 ] 00:26:13.710 }' 00:26:13.710 05:08:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:13.710 05:08:43 -- common/autotest_common.sh@10 -- # set +x 00:26:14.277 05:08:44 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:26:14.277 05:08:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:14.277 05:08:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.277 05:08:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:14.535 05:08:44 -- bdev/bdev_raid.sh@274 -- # 
raid_bdev=Existed_Raid 00:26:14.535 05:08:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:14.535 05:08:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:14.794 [2024-04-27 05:08:44.632212] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:14.794 [2024-04-27 05:08:44.632544] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:14.794 [2024-04-27 05:08:44.632809] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:14.794 05:08:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:14.794 05:08:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:14.794 05:08:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.794 05:08:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:15.053 05:08:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:15.053 05:08:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:15.053 05:08:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:15.311 [2024-04-27 05:08:45.197168] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:15.569 05:08:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:15.569 05:08:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:15.569 05:08:45 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.569 05:08:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:15.569 05:08:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:15.569 05:08:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:15.569 05:08:45 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:26:15.828 [2024-04-27 05:08:45.693625] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:15.828 [2024-04-27 05:08:45.693992] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:26:15.828 05:08:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:15.828 05:08:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:15.828 05:08:45 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.828 05:08:45 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:26:16.087 05:08:45 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:26:16.087 05:08:45 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:26:16.087 05:08:45 -- bdev/bdev_raid.sh@287 -- # killprocess 142313 00:26:16.087 05:08:45 -- common/autotest_common.sh@926 -- # '[' -z 142313 ']' 00:26:16.087 05:08:45 -- common/autotest_common.sh@930 -- # kill -0 142313 00:26:16.087 05:08:45 -- common/autotest_common.sh@931 -- # uname 00:26:16.087 05:08:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:16.087 05:08:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142313 00:26:16.345 05:08:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:16.345 05:08:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:16.345 05:08:46 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 142313' 00:26:16.345 killing process with pid 142313 00:26:16.345 05:08:46 -- common/autotest_common.sh@945 -- # kill 142313 00:26:16.345 05:08:46 -- common/autotest_common.sh@950 -- # wait 142313 00:26:16.345 [2024-04-27 05:08:46.005078] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:16.345 [2024-04-27 05:08:46.005181] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@289 -- # return 0 00:26:16.604 00:26:16.604 real 0m15.445s 00:26:16.604 user 0m28.284s 00:26:16.604 sys 0m2.111s 00:26:16.604 05:08:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:16.604 05:08:46 -- common/autotest_common.sh@10 -- # set +x 00:26:16.604 ************************************ 00:26:16.604 END TEST raid5f_state_function_test_sb 00:26:16.604 ************************************ 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:26:16.604 05:08:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:26:16.604 05:08:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:16.604 05:08:46 -- common/autotest_common.sh@10 -- # set +x 00:26:16.604 ************************************ 00:26:16.604 START TEST raid5f_superblock_test 00:26:16.604 ************************************ 00:26:16.604 05:08:46 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 4 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@357 -- # raid_pid=142769 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:16.604 05:08:46 -- bdev/bdev_raid.sh@358 -- # waitforlisten 142769 /var/tmp/spdk-raid.sock 00:26:16.604 05:08:46 -- common/autotest_common.sh@819 -- # '[' -z 142769 ']' 00:26:16.604 05:08:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:16.604 05:08:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:16.604 05:08:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:16.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
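The teardown check that closes the test above amounts to confirming that no raid bdev survives once its last base bdev is deleted; a standalone sketch of that check, reusing the exact RPC call and jq filter from the trace (socket path assumed unchanged):

  raid_bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[0]["name"] | select(.)')
  # An empty result means the array was deconfigured and cleaned up together with its base bdevs.
  [ -z "$raid_bdev" ] && echo 'raid bdev fully torn down'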
00:26:16.604 05:08:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:16.604 05:08:46 -- common/autotest_common.sh@10 -- # set +x 00:26:16.604 [2024-04-27 05:08:46.475446] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:16.604 [2024-04-27 05:08:46.475933] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142769 ] 00:26:16.863 [2024-04-27 05:08:46.636660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.863 [2024-04-27 05:08:46.761522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.122 [2024-04-27 05:08:46.839957] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:17.690 05:08:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:17.690 05:08:47 -- common/autotest_common.sh@852 -- # return 0 00:26:17.690 05:08:47 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:26:17.690 05:08:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:17.690 05:08:47 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:26:17.690 05:08:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:26:17.690 05:08:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:17.690 05:08:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:17.690 05:08:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:17.690 05:08:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:17.690 05:08:47 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:26:17.949 malloc1 00:26:17.949 05:08:47 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:18.208 [2024-04-27 05:08:47.951867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:18.208 [2024-04-27 05:08:47.952318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:18.208 [2024-04-27 05:08:47.952427] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:26:18.208 [2024-04-27 05:08:47.952764] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:18.208 [2024-04-27 05:08:47.955884] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:18.208 [2024-04-27 05:08:47.956069] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:18.208 pt1 00:26:18.208 05:08:47 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:18.208 05:08:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:18.208 05:08:47 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:26:18.208 05:08:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:26:18.208 05:08:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:18.208 05:08:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:18.208 05:08:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:18.208 05:08:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:18.208 05:08:47 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:26:18.467 malloc2 00:26:18.467 05:08:48 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:18.725 [2024-04-27 05:08:48.511681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:18.725 [2024-04-27 05:08:48.511963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:18.725 [2024-04-27 05:08:48.512163] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:18.725 [2024-04-27 05:08:48.512362] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:18.725 [2024-04-27 05:08:48.515357] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:18.725 [2024-04-27 05:08:48.515536] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:18.725 pt2 00:26:18.725 05:08:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:18.725 05:08:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:18.725 05:08:48 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:26:18.725 05:08:48 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:26:18.725 05:08:48 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:18.725 05:08:48 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:18.725 05:08:48 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:18.725 05:08:48 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:18.725 05:08:48 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:26:18.984 malloc3 00:26:18.984 05:08:48 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:19.243 [2024-04-27 05:08:49.109869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:19.243 [2024-04-27 05:08:49.110278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:19.243 [2024-04-27 05:08:49.110470] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:26:19.243 [2024-04-27 05:08:49.110631] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:19.243 [2024-04-27 05:08:49.113523] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:19.243 [2024-04-27 05:08:49.113731] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:19.243 pt3 00:26:19.243 05:08:49 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:19.243 05:08:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:19.243 05:08:49 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:26:19.243 05:08:49 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:26:19.243 05:08:49 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:26:19.243 05:08:49 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:19.243 05:08:49 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:19.243 05:08:49 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:19.243 05:08:49 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:26:19.502 malloc4 00:26:19.502 05:08:49 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:19.760 [2024-04-27 05:08:49.606223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:19.760 [2024-04-27 05:08:49.606635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:19.760 [2024-04-27 05:08:49.606816] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:26:19.760 [2024-04-27 05:08:49.606977] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:19.760 [2024-04-27 05:08:49.609942] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:19.760 [2024-04-27 05:08:49.610133] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:19.760 pt4 00:26:19.760 05:08:49 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:19.760 05:08:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:19.760 05:08:49 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:26:20.019 [2024-04-27 05:08:49.854726] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:20.019 [2024-04-27 05:08:49.857584] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:20.020 [2024-04-27 05:08:49.857864] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:20.020 [2024-04-27 05:08:49.858111] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:20.020 [2024-04-27 05:08:49.858547] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:26:20.020 [2024-04-27 05:08:49.858681] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:20.020 [2024-04-27 05:08:49.858958] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:26:20.020 [2024-04-27 05:08:49.860051] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:26:20.020 [2024-04-27 05:08:49.860186] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:26:20.020 [2024-04-27 05:08:49.860571] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:20.020 05:08:49 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:20.020 05:08:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:20.020 05:08:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:20.020 05:08:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:20.020 05:08:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:20.020 05:08:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:20.020 05:08:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:20.020 05:08:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:20.020 05:08:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:20.020 05:08:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:20.020 05:08:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
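Condensed from the construction steps traced above, the superblock test builds each member as a 32 MiB malloc bdev wrapped in a passthru bdev with a fixed UUID and then assembles the array; a sketch with the same names, sizes and flags as in the trace (the loop and the $RPC shorthand are added here for brevity):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
      # 32 MiB backing malloc bdev with 512-byte blocks (65536 blocks, as reported above).
      $RPC bdev_malloc_create 32 512 -b malloc$i
      # Wrap it in a passthru bdev named ptN with the fixed UUID used by the test.
      $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  # Assemble the raid5f array with a 64 KiB strip size and an on-disk superblock (-s).
  $RPC bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s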
00:26:20.020 05:08:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:20.279 05:08:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:20.279 "name": "raid_bdev1", 00:26:20.279 "uuid": "dc6a77c5-6312-417d-a777-cd6278f5868c", 00:26:20.279 "strip_size_kb": 64, 00:26:20.279 "state": "online", 00:26:20.279 "raid_level": "raid5f", 00:26:20.279 "superblock": true, 00:26:20.279 "num_base_bdevs": 4, 00:26:20.279 "num_base_bdevs_discovered": 4, 00:26:20.279 "num_base_bdevs_operational": 4, 00:26:20.279 "base_bdevs_list": [ 00:26:20.279 { 00:26:20.279 "name": "pt1", 00:26:20.279 "uuid": "51328a16-eb80-5303-928d-53105c0e723b", 00:26:20.279 "is_configured": true, 00:26:20.279 "data_offset": 2048, 00:26:20.279 "data_size": 63488 00:26:20.279 }, 00:26:20.279 { 00:26:20.279 "name": "pt2", 00:26:20.279 "uuid": "a7109a6d-eab7-578e-9376-552d7fb3f615", 00:26:20.279 "is_configured": true, 00:26:20.279 "data_offset": 2048, 00:26:20.279 "data_size": 63488 00:26:20.279 }, 00:26:20.279 { 00:26:20.279 "name": "pt3", 00:26:20.279 "uuid": "47b43ad2-7834-56a9-8a06-68a6ef47a5b3", 00:26:20.279 "is_configured": true, 00:26:20.279 "data_offset": 2048, 00:26:20.279 "data_size": 63488 00:26:20.279 }, 00:26:20.279 { 00:26:20.279 "name": "pt4", 00:26:20.279 "uuid": "f26d9a4d-4a49-5e2f-a754-24f619caf1ac", 00:26:20.279 "is_configured": true, 00:26:20.279 "data_offset": 2048, 00:26:20.279 "data_size": 63488 00:26:20.279 } 00:26:20.279 ] 00:26:20.279 }' 00:26:20.279 05:08:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:20.279 05:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:20.879 05:08:50 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:20.879 05:08:50 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:26:21.138 [2024-04-27 05:08:50.980477] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:21.138 05:08:51 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=dc6a77c5-6312-417d-a777-cd6278f5868c 00:26:21.138 05:08:51 -- bdev/bdev_raid.sh@380 -- # '[' -z dc6a77c5-6312-417d-a777-cd6278f5868c ']' 00:26:21.138 05:08:51 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:21.396 [2024-04-27 05:08:51.276355] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:21.396 [2024-04-27 05:08:51.276700] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:21.396 [2024-04-27 05:08:51.276983] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:21.396 [2024-04-27 05:08:51.277214] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:21.396 [2024-04-27 05:08:51.277342] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:26:21.396 05:08:51 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:26:21.396 05:08:51 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.963 05:08:51 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:26:21.963 05:08:51 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:26:21.963 05:08:51 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:21.963 05:08:51 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
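The capture-and-delete step above reduces to a few RPC calls; a sketch using the calls and jq filters shown in the trace, again with the $RPC shorthand added for readability:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Dump only the raid_bdev1 entry (state, strip size, per-member data_offset/data_size).
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
  # Record the array's UUID (dc6a77c5-... in this run) before tearing it down.
  raid_bdev_uuid=$($RPC bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
  # Deleting the raid bdev takes it from online to offline; the superblock stays on the member bdevs,
  # which is what the re-create attempts later in the trace are checked against.
  $RPC bdev_raid_delete raid_bdev1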
00:26:21.963 05:08:51 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:21.963 05:08:51 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:22.221 05:08:52 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:22.221 05:08:52 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:22.478 05:08:52 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:22.478 05:08:52 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:22.736 05:08:52 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:22.736 05:08:52 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:22.994 05:08:52 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:26:22.994 05:08:52 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:22.994 05:08:52 -- common/autotest_common.sh@640 -- # local es=0 00:26:22.994 05:08:52 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:22.994 05:08:52 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:22.994 05:08:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.994 05:08:52 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:22.994 05:08:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.994 05:08:52 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:22.994 05:08:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.994 05:08:52 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:22.994 05:08:52 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:22.994 05:08:52 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:23.252 [2024-04-27 05:08:53.072746] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:23.252 [2024-04-27 05:08:53.075595] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:23.252 [2024-04-27 05:08:53.075830] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:23.252 [2024-04-27 05:08:53.075922] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:26:23.252 [2024-04-27 05:08:53.076136] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:26:23.252 [2024-04-27 05:08:53.076351] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:26:23.252 [2024-04-27 05:08:53.076510] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:26:23.252 
[2024-04-27 05:08:53.076643] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:26:23.252 [2024-04-27 05:08:53.076780] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:23.252 [2024-04-27 05:08:53.076831] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:26:23.252 request: 00:26:23.252 { 00:26:23.252 "name": "raid_bdev1", 00:26:23.252 "raid_level": "raid5f", 00:26:23.252 "base_bdevs": [ 00:26:23.252 "malloc1", 00:26:23.252 "malloc2", 00:26:23.252 "malloc3", 00:26:23.252 "malloc4" 00:26:23.252 ], 00:26:23.252 "superblock": false, 00:26:23.252 "strip_size_kb": 64, 00:26:23.252 "method": "bdev_raid_create", 00:26:23.252 "req_id": 1 00:26:23.252 } 00:26:23.252 Got JSON-RPC error response 00:26:23.252 response: 00:26:23.252 { 00:26:23.252 "code": -17, 00:26:23.252 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:23.252 } 00:26:23.252 05:08:53 -- common/autotest_common.sh@643 -- # es=1 00:26:23.252 05:08:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:23.252 05:08:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:23.252 05:08:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:23.252 05:08:53 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.252 05:08:53 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:26:23.510 05:08:53 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:26:23.510 05:08:53 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:26:23.510 05:08:53 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:23.768 [2024-04-27 05:08:53.597279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:23.768 [2024-04-27 05:08:53.597699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:23.769 [2024-04-27 05:08:53.597786] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:26:23.769 [2024-04-27 05:08:53.597950] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:23.769 [2024-04-27 05:08:53.600901] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:23.769 [2024-04-27 05:08:53.601100] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:23.769 [2024-04-27 05:08:53.601374] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:26:23.769 [2024-04-27 05:08:53.601576] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:23.769 pt1 00:26:23.769 05:08:53 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:26:23.769 05:08:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:23.769 05:08:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:23.769 05:08:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:23.769 05:08:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:23.769 05:08:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:23.769 05:08:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:23.769 05:08:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:23.769 05:08:53 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:26:23.769 05:08:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:23.769 05:08:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.769 05:08:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:24.027 05:08:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:24.027 "name": "raid_bdev1", 00:26:24.027 "uuid": "dc6a77c5-6312-417d-a777-cd6278f5868c", 00:26:24.027 "strip_size_kb": 64, 00:26:24.027 "state": "configuring", 00:26:24.027 "raid_level": "raid5f", 00:26:24.027 "superblock": true, 00:26:24.027 "num_base_bdevs": 4, 00:26:24.027 "num_base_bdevs_discovered": 1, 00:26:24.027 "num_base_bdevs_operational": 4, 00:26:24.027 "base_bdevs_list": [ 00:26:24.027 { 00:26:24.027 "name": "pt1", 00:26:24.027 "uuid": "51328a16-eb80-5303-928d-53105c0e723b", 00:26:24.027 "is_configured": true, 00:26:24.027 "data_offset": 2048, 00:26:24.027 "data_size": 63488 00:26:24.027 }, 00:26:24.027 { 00:26:24.027 "name": null, 00:26:24.027 "uuid": "a7109a6d-eab7-578e-9376-552d7fb3f615", 00:26:24.027 "is_configured": false, 00:26:24.027 "data_offset": 2048, 00:26:24.027 "data_size": 63488 00:26:24.027 }, 00:26:24.027 { 00:26:24.027 "name": null, 00:26:24.027 "uuid": "47b43ad2-7834-56a9-8a06-68a6ef47a5b3", 00:26:24.027 "is_configured": false, 00:26:24.027 "data_offset": 2048, 00:26:24.027 "data_size": 63488 00:26:24.027 }, 00:26:24.027 { 00:26:24.027 "name": null, 00:26:24.027 "uuid": "f26d9a4d-4a49-5e2f-a754-24f619caf1ac", 00:26:24.027 "is_configured": false, 00:26:24.027 "data_offset": 2048, 00:26:24.027 "data_size": 63488 00:26:24.027 } 00:26:24.027 ] 00:26:24.027 }' 00:26:24.027 05:08:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:24.027 05:08:53 -- common/autotest_common.sh@10 -- # set +x 00:26:24.963 05:08:54 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:26:24.963 05:08:54 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:24.963 [2024-04-27 05:08:54.769809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:24.963 [2024-04-27 05:08:54.770126] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:24.963 [2024-04-27 05:08:54.770311] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:26:24.963 [2024-04-27 05:08:54.770468] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:24.963 [2024-04-27 05:08:54.771149] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:24.963 [2024-04-27 05:08:54.771333] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:24.963 [2024-04-27 05:08:54.771579] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:24.963 [2024-04-27 05:08:54.771726] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:24.963 pt2 00:26:24.963 05:08:54 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:25.221 [2024-04-27 05:08:55.045940] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:25.221 05:08:55 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:26:25.221 05:08:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:26:25.221 05:08:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:25.221 05:08:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:25.221 05:08:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:25.221 05:08:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:25.221 05:08:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:25.221 05:08:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:25.221 05:08:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:25.221 05:08:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:25.221 05:08:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.221 05:08:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.499 05:08:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:25.499 "name": "raid_bdev1", 00:26:25.499 "uuid": "dc6a77c5-6312-417d-a777-cd6278f5868c", 00:26:25.499 "strip_size_kb": 64, 00:26:25.499 "state": "configuring", 00:26:25.499 "raid_level": "raid5f", 00:26:25.499 "superblock": true, 00:26:25.499 "num_base_bdevs": 4, 00:26:25.499 "num_base_bdevs_discovered": 1, 00:26:25.499 "num_base_bdevs_operational": 4, 00:26:25.499 "base_bdevs_list": [ 00:26:25.499 { 00:26:25.499 "name": "pt1", 00:26:25.499 "uuid": "51328a16-eb80-5303-928d-53105c0e723b", 00:26:25.499 "is_configured": true, 00:26:25.499 "data_offset": 2048, 00:26:25.499 "data_size": 63488 00:26:25.499 }, 00:26:25.499 { 00:26:25.499 "name": null, 00:26:25.499 "uuid": "a7109a6d-eab7-578e-9376-552d7fb3f615", 00:26:25.499 "is_configured": false, 00:26:25.499 "data_offset": 2048, 00:26:25.499 "data_size": 63488 00:26:25.499 }, 00:26:25.499 { 00:26:25.499 "name": null, 00:26:25.499 "uuid": "47b43ad2-7834-56a9-8a06-68a6ef47a5b3", 00:26:25.499 "is_configured": false, 00:26:25.499 "data_offset": 2048, 00:26:25.499 "data_size": 63488 00:26:25.499 }, 00:26:25.499 { 00:26:25.499 "name": null, 00:26:25.500 "uuid": "f26d9a4d-4a49-5e2f-a754-24f619caf1ac", 00:26:25.500 "is_configured": false, 00:26:25.500 "data_offset": 2048, 00:26:25.500 "data_size": 63488 00:26:25.500 } 00:26:25.500 ] 00:26:25.500 }' 00:26:25.500 05:08:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:25.500 05:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:26.082 05:08:55 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:26:26.082 05:08:55 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:26.082 05:08:55 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:26.341 [2024-04-27 05:08:56.166218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:26.341 [2024-04-27 05:08:56.166501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:26.341 [2024-04-27 05:08:56.166597] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:26:26.341 [2024-04-27 05:08:56.166861] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:26.341 [2024-04-27 05:08:56.167473] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:26.341 [2024-04-27 05:08:56.167573] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:26.341 [2024-04-27 05:08:56.167711] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:26:26.341 [2024-04-27 05:08:56.167771] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:26.341 pt2 00:26:26.341 05:08:56 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:26:26.341 05:08:56 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:26.341 05:08:56 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:26.599 [2024-04-27 05:08:56.402278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:26.599 [2024-04-27 05:08:56.402703] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:26.599 [2024-04-27 05:08:56.402794] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:26:26.599 [2024-04-27 05:08:56.402978] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:26.599 [2024-04-27 05:08:56.403589] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:26.599 [2024-04-27 05:08:56.403771] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:26.599 [2024-04-27 05:08:56.403988] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:26:26.599 [2024-04-27 05:08:56.404128] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:26.599 pt3 00:26:26.599 05:08:56 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:26:26.599 05:08:56 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:26.599 05:08:56 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:26.858 [2024-04-27 05:08:56.678348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:26.858 [2024-04-27 05:08:56.678757] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:26.858 [2024-04-27 05:08:56.678850] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:26:26.858 [2024-04-27 05:08:56.679094] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:26.858 [2024-04-27 05:08:56.679686] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:26.858 [2024-04-27 05:08:56.679861] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:26.858 [2024-04-27 05:08:56.680085] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:26:26.859 [2024-04-27 05:08:56.680228] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:26.859 [2024-04-27 05:08:56.680476] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:26:26.859 [2024-04-27 05:08:56.680613] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:26.859 [2024-04-27 05:08:56.680741] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:26.859 [2024-04-27 05:08:56.681571] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:26:26.859 [2024-04-27 05:08:56.681702] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:26:26.859 [2024-04-27 05:08:56.681929] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:26:26.859 pt4 00:26:26.859 05:08:56 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:26:26.859 05:08:56 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:26.859 05:08:56 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:26.859 05:08:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:26.859 05:08:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:26.859 05:08:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:26.859 05:08:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:26.859 05:08:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:26.859 05:08:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:26.859 05:08:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:26.859 05:08:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:26.859 05:08:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:26.859 05:08:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.859 05:08:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.117 05:08:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:27.117 "name": "raid_bdev1", 00:26:27.117 "uuid": "dc6a77c5-6312-417d-a777-cd6278f5868c", 00:26:27.117 "strip_size_kb": 64, 00:26:27.117 "state": "online", 00:26:27.117 "raid_level": "raid5f", 00:26:27.117 "superblock": true, 00:26:27.117 "num_base_bdevs": 4, 00:26:27.117 "num_base_bdevs_discovered": 4, 00:26:27.117 "num_base_bdevs_operational": 4, 00:26:27.117 "base_bdevs_list": [ 00:26:27.117 { 00:26:27.117 "name": "pt1", 00:26:27.117 "uuid": "51328a16-eb80-5303-928d-53105c0e723b", 00:26:27.117 "is_configured": true, 00:26:27.117 "data_offset": 2048, 00:26:27.117 "data_size": 63488 00:26:27.117 }, 00:26:27.117 { 00:26:27.117 "name": "pt2", 00:26:27.117 "uuid": "a7109a6d-eab7-578e-9376-552d7fb3f615", 00:26:27.117 "is_configured": true, 00:26:27.117 "data_offset": 2048, 00:26:27.118 "data_size": 63488 00:26:27.118 }, 00:26:27.118 { 00:26:27.118 "name": "pt3", 00:26:27.118 "uuid": "47b43ad2-7834-56a9-8a06-68a6ef47a5b3", 00:26:27.118 "is_configured": true, 00:26:27.118 "data_offset": 2048, 00:26:27.118 "data_size": 63488 00:26:27.118 }, 00:26:27.118 { 00:26:27.118 "name": "pt4", 00:26:27.118 "uuid": "f26d9a4d-4a49-5e2f-a754-24f619caf1ac", 00:26:27.118 "is_configured": true, 00:26:27.118 "data_offset": 2048, 00:26:27.118 "data_size": 63488 00:26:27.118 } 00:26:27.118 ] 00:26:27.118 }' 00:26:27.118 05:08:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:27.118 05:08:56 -- common/autotest_common.sh@10 -- # set +x 00:26:27.686 05:08:57 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:26:27.686 05:08:57 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:27.944 [2024-04-27 05:08:57.813071] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:27.944 05:08:57 -- bdev/bdev_raid.sh@430 -- # '[' dc6a77c5-6312-417d-a777-cd6278f5868c '!=' dc6a77c5-6312-417d-a777-cd6278f5868c ']' 00:26:27.944 05:08:57 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:26:27.944 05:08:57 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:27.944 05:08:57 -- bdev/bdev_raid.sh@196 -- # return 0 00:26:27.944 05:08:57 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:26:28.202 [2024-04-27 05:08:58.093067] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:28.202 05:08:58 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:28.202 05:08:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:28.202 05:08:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:28.202 05:08:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:28.202 05:08:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:28.202 05:08:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:28.202 05:08:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:28.202 05:08:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:28.202 05:08:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:28.202 05:08:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:28.460 05:08:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.460 05:08:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.460 05:08:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:28.460 "name": "raid_bdev1", 00:26:28.460 "uuid": "dc6a77c5-6312-417d-a777-cd6278f5868c", 00:26:28.460 "strip_size_kb": 64, 00:26:28.460 "state": "online", 00:26:28.460 "raid_level": "raid5f", 00:26:28.460 "superblock": true, 00:26:28.460 "num_base_bdevs": 4, 00:26:28.460 "num_base_bdevs_discovered": 3, 00:26:28.460 "num_base_bdevs_operational": 3, 00:26:28.460 "base_bdevs_list": [ 00:26:28.460 { 00:26:28.460 "name": null, 00:26:28.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.460 "is_configured": false, 00:26:28.460 "data_offset": 2048, 00:26:28.460 "data_size": 63488 00:26:28.460 }, 00:26:28.460 { 00:26:28.460 "name": "pt2", 00:26:28.460 "uuid": "a7109a6d-eab7-578e-9376-552d7fb3f615", 00:26:28.460 "is_configured": true, 00:26:28.460 "data_offset": 2048, 00:26:28.460 "data_size": 63488 00:26:28.460 }, 00:26:28.460 { 00:26:28.460 "name": "pt3", 00:26:28.460 "uuid": "47b43ad2-7834-56a9-8a06-68a6ef47a5b3", 00:26:28.460 "is_configured": true, 00:26:28.460 "data_offset": 2048, 00:26:28.460 "data_size": 63488 00:26:28.460 }, 00:26:28.460 { 00:26:28.460 "name": "pt4", 00:26:28.460 "uuid": "f26d9a4d-4a49-5e2f-a754-24f619caf1ac", 00:26:28.460 "is_configured": true, 00:26:28.460 "data_offset": 2048, 00:26:28.460 "data_size": 63488 00:26:28.460 } 00:26:28.460 ] 00:26:28.460 }' 00:26:28.460 05:08:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:28.460 05:08:58 -- common/autotest_common.sh@10 -- # set +x 00:26:29.392 05:08:59 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:29.392 [2024-04-27 05:08:59.233289] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:29.392 [2024-04-27 05:08:59.233636] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:29.392 [2024-04-27 05:08:59.233867] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:29.392 [2024-04-27 05:08:59.234102] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:29.392 [2024-04-27 05:08:59.234232] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:26:29.392 05:08:59 -- bdev/bdev_raid.sh@443 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.392 05:08:59 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:26:29.650 05:08:59 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:26:29.650 05:08:59 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:26:29.650 05:08:59 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:26:29.650 05:08:59 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:26:29.650 05:08:59 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:29.908 05:08:59 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:26:29.908 05:08:59 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:26:29.908 05:08:59 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:30.166 05:08:59 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:26:30.166 05:08:59 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:26:30.166 05:08:59 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:30.435 05:09:00 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:26:30.435 05:09:00 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:26:30.435 05:09:00 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:26:30.435 05:09:00 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:26:30.435 05:09:00 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:30.706 [2024-04-27 05:09:00.445569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:30.706 [2024-04-27 05:09:00.446041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.706 [2024-04-27 05:09:00.446141] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:26:30.706 [2024-04-27 05:09:00.446312] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.706 [2024-04-27 05:09:00.449575] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.706 [2024-04-27 05:09:00.449793] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:30.706 [2024-04-27 05:09:00.450030] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:30.706 [2024-04-27 05:09:00.450194] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:30.706 pt2 00:26:30.706 05:09:00 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:30.706 05:09:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:30.706 05:09:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:30.706 05:09:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:30.706 05:09:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:30.706 05:09:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:30.706 05:09:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:30.706 05:09:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:30.706 05:09:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:30.706 05:09:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:30.706 05:09:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.706 05:09:00 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.964 05:09:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:30.964 "name": "raid_bdev1", 00:26:30.964 "uuid": "dc6a77c5-6312-417d-a777-cd6278f5868c", 00:26:30.964 "strip_size_kb": 64, 00:26:30.964 "state": "configuring", 00:26:30.964 "raid_level": "raid5f", 00:26:30.964 "superblock": true, 00:26:30.964 "num_base_bdevs": 4, 00:26:30.964 "num_base_bdevs_discovered": 1, 00:26:30.964 "num_base_bdevs_operational": 3, 00:26:30.964 "base_bdevs_list": [ 00:26:30.964 { 00:26:30.964 "name": null, 00:26:30.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.964 "is_configured": false, 00:26:30.964 "data_offset": 2048, 00:26:30.964 "data_size": 63488 00:26:30.964 }, 00:26:30.964 { 00:26:30.964 "name": "pt2", 00:26:30.964 "uuid": "a7109a6d-eab7-578e-9376-552d7fb3f615", 00:26:30.964 "is_configured": true, 00:26:30.964 "data_offset": 2048, 00:26:30.964 "data_size": 63488 00:26:30.964 }, 00:26:30.964 { 00:26:30.964 "name": null, 00:26:30.964 "uuid": "47b43ad2-7834-56a9-8a06-68a6ef47a5b3", 00:26:30.964 "is_configured": false, 00:26:30.964 "data_offset": 2048, 00:26:30.964 "data_size": 63488 00:26:30.964 }, 00:26:30.964 { 00:26:30.964 "name": null, 00:26:30.964 "uuid": "f26d9a4d-4a49-5e2f-a754-24f619caf1ac", 00:26:30.964 "is_configured": false, 00:26:30.964 "data_offset": 2048, 00:26:30.964 "data_size": 63488 00:26:30.964 } 00:26:30.964 ] 00:26:30.964 }' 00:26:30.964 05:09:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:30.964 05:09:00 -- common/autotest_common.sh@10 -- # set +x 00:26:31.530 05:09:01 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:26:31.530 05:09:01 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:26:31.530 05:09:01 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:31.789 [2024-04-27 05:09:01.522473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:31.789 [2024-04-27 05:09:01.522785] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:31.789 [2024-04-27 05:09:01.522957] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:26:31.789 [2024-04-27 05:09:01.523086] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:31.789 [2024-04-27 05:09:01.523750] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:31.789 [2024-04-27 05:09:01.523926] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:31.789 [2024-04-27 05:09:01.524167] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:26:31.789 [2024-04-27 05:09:01.524308] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:31.789 pt3 00:26:31.789 05:09:01 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:31.789 05:09:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:31.789 05:09:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:31.789 05:09:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:31.789 05:09:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:31.789 05:09:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:31.789 05:09:01 -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:26:31.789 05:09:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:31.789 05:09:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:31.789 05:09:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:31.789 05:09:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.789 05:09:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.047 05:09:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:32.047 "name": "raid_bdev1", 00:26:32.047 "uuid": "dc6a77c5-6312-417d-a777-cd6278f5868c", 00:26:32.047 "strip_size_kb": 64, 00:26:32.047 "state": "configuring", 00:26:32.047 "raid_level": "raid5f", 00:26:32.047 "superblock": true, 00:26:32.047 "num_base_bdevs": 4, 00:26:32.047 "num_base_bdevs_discovered": 2, 00:26:32.047 "num_base_bdevs_operational": 3, 00:26:32.047 "base_bdevs_list": [ 00:26:32.047 { 00:26:32.047 "name": null, 00:26:32.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:32.047 "is_configured": false, 00:26:32.047 "data_offset": 2048, 00:26:32.047 "data_size": 63488 00:26:32.047 }, 00:26:32.047 { 00:26:32.047 "name": "pt2", 00:26:32.047 "uuid": "a7109a6d-eab7-578e-9376-552d7fb3f615", 00:26:32.047 "is_configured": true, 00:26:32.047 "data_offset": 2048, 00:26:32.047 "data_size": 63488 00:26:32.047 }, 00:26:32.047 { 00:26:32.047 "name": "pt3", 00:26:32.047 "uuid": "47b43ad2-7834-56a9-8a06-68a6ef47a5b3", 00:26:32.047 "is_configured": true, 00:26:32.047 "data_offset": 2048, 00:26:32.047 "data_size": 63488 00:26:32.047 }, 00:26:32.047 { 00:26:32.047 "name": null, 00:26:32.047 "uuid": "f26d9a4d-4a49-5e2f-a754-24f619caf1ac", 00:26:32.047 "is_configured": false, 00:26:32.047 "data_offset": 2048, 00:26:32.047 "data_size": 63488 00:26:32.047 } 00:26:32.047 ] 00:26:32.047 }' 00:26:32.047 05:09:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:32.047 05:09:01 -- common/autotest_common.sh@10 -- # set +x 00:26:32.613 05:09:02 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:26:32.613 05:09:02 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:26:32.613 05:09:02 -- bdev/bdev_raid.sh@462 -- # i=3 00:26:32.613 05:09:02 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:32.871 [2024-04-27 05:09:02.574785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:32.871 [2024-04-27 05:09:02.575123] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:32.871 [2024-04-27 05:09:02.575218] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:26:32.871 [2024-04-27 05:09:02.575486] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:32.871 [2024-04-27 05:09:02.576144] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:32.871 [2024-04-27 05:09:02.576229] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:32.871 [2024-04-27 05:09:02.576365] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:26:32.871 [2024-04-27 05:09:02.576424] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:32.871 [2024-04-27 05:09:02.576616] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ba80 00:26:32.871 [2024-04-27 05:09:02.576899] 
bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:32.871 [2024-04-27 05:09:02.577049] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:26:32.871 [2024-04-27 05:09:02.578062] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ba80 00:26:32.871 [2024-04-27 05:09:02.578221] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ba80 00:26:32.871 [2024-04-27 05:09:02.578576] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:32.871 pt4 00:26:32.871 05:09:02 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:32.871 05:09:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:32.871 05:09:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:32.871 05:09:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:32.871 05:09:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:32.871 05:09:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:32.871 05:09:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:32.871 05:09:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:32.871 05:09:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:32.871 05:09:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:32.871 05:09:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.871 05:09:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.129 05:09:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:33.129 "name": "raid_bdev1", 00:26:33.129 "uuid": "dc6a77c5-6312-417d-a777-cd6278f5868c", 00:26:33.129 "strip_size_kb": 64, 00:26:33.129 "state": "online", 00:26:33.129 "raid_level": "raid5f", 00:26:33.129 "superblock": true, 00:26:33.129 "num_base_bdevs": 4, 00:26:33.129 "num_base_bdevs_discovered": 3, 00:26:33.129 "num_base_bdevs_operational": 3, 00:26:33.129 "base_bdevs_list": [ 00:26:33.129 { 00:26:33.129 "name": null, 00:26:33.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:33.129 "is_configured": false, 00:26:33.129 "data_offset": 2048, 00:26:33.129 "data_size": 63488 00:26:33.129 }, 00:26:33.129 { 00:26:33.129 "name": "pt2", 00:26:33.129 "uuid": "a7109a6d-eab7-578e-9376-552d7fb3f615", 00:26:33.129 "is_configured": true, 00:26:33.129 "data_offset": 2048, 00:26:33.129 "data_size": 63488 00:26:33.129 }, 00:26:33.129 { 00:26:33.129 "name": "pt3", 00:26:33.129 "uuid": "47b43ad2-7834-56a9-8a06-68a6ef47a5b3", 00:26:33.129 "is_configured": true, 00:26:33.129 "data_offset": 2048, 00:26:33.129 "data_size": 63488 00:26:33.129 }, 00:26:33.129 { 00:26:33.129 "name": "pt4", 00:26:33.129 "uuid": "f26d9a4d-4a49-5e2f-a754-24f619caf1ac", 00:26:33.129 "is_configured": true, 00:26:33.129 "data_offset": 2048, 00:26:33.129 "data_size": 63488 00:26:33.129 } 00:26:33.129 ] 00:26:33.129 }' 00:26:33.129 05:09:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:33.129 05:09:02 -- common/autotest_common.sh@10 -- # set +x 00:26:33.694 05:09:03 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:26:33.694 05:09:03 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:33.951 [2024-04-27 05:09:03.767235] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:33.951 [2024-04-27 05:09:03.767449] 
bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:33.951 [2024-04-27 05:09:03.767653] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:33.951 [2024-04-27 05:09:03.767848] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:33.951 [2024-04-27 05:09:03.767970] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state offline 00:26:33.951 05:09:03 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.951 05:09:03 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:26:34.210 05:09:04 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:26:34.210 05:09:04 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:26:34.210 05:09:04 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:34.467 [2024-04-27 05:09:04.223347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:34.467 [2024-04-27 05:09:04.223648] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:34.467 [2024-04-27 05:09:04.223827] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:26:34.467 [2024-04-27 05:09:04.223952] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:34.467 [2024-04-27 05:09:04.226898] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:34.467 [2024-04-27 05:09:04.227107] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:34.467 [2024-04-27 05:09:04.227331] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:26:34.467 [2024-04-27 05:09:04.227481] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:34.467 pt1 00:26:34.467 05:09:04 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:26:34.467 05:09:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:34.467 05:09:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:34.467 05:09:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:34.467 05:09:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:34.467 05:09:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:34.467 05:09:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:34.467 05:09:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:34.467 05:09:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:34.467 05:09:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:34.467 05:09:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.467 05:09:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.734 05:09:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:34.734 "name": "raid_bdev1", 00:26:34.734 "uuid": "dc6a77c5-6312-417d-a777-cd6278f5868c", 00:26:34.734 "strip_size_kb": 64, 00:26:34.734 "state": "configuring", 00:26:34.734 "raid_level": "raid5f", 00:26:34.734 "superblock": true, 00:26:34.734 "num_base_bdevs": 4, 00:26:34.734 "num_base_bdevs_discovered": 1, 00:26:34.734 "num_base_bdevs_operational": 4, 00:26:34.734 
"base_bdevs_list": [ 00:26:34.734 { 00:26:34.734 "name": "pt1", 00:26:34.734 "uuid": "51328a16-eb80-5303-928d-53105c0e723b", 00:26:34.734 "is_configured": true, 00:26:34.734 "data_offset": 2048, 00:26:34.734 "data_size": 63488 00:26:34.734 }, 00:26:34.734 { 00:26:34.734 "name": null, 00:26:34.734 "uuid": "a7109a6d-eab7-578e-9376-552d7fb3f615", 00:26:34.734 "is_configured": false, 00:26:34.734 "data_offset": 2048, 00:26:34.734 "data_size": 63488 00:26:34.734 }, 00:26:34.734 { 00:26:34.734 "name": null, 00:26:34.734 "uuid": "47b43ad2-7834-56a9-8a06-68a6ef47a5b3", 00:26:34.734 "is_configured": false, 00:26:34.734 "data_offset": 2048, 00:26:34.734 "data_size": 63488 00:26:34.734 }, 00:26:34.734 { 00:26:34.734 "name": null, 00:26:34.734 "uuid": "f26d9a4d-4a49-5e2f-a754-24f619caf1ac", 00:26:34.734 "is_configured": false, 00:26:34.734 "data_offset": 2048, 00:26:34.734 "data_size": 63488 00:26:34.734 } 00:26:34.734 ] 00:26:34.734 }' 00:26:34.734 05:09:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:34.734 05:09:04 -- common/autotest_common.sh@10 -- # set +x 00:26:35.315 05:09:05 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:26:35.315 05:09:05 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:35.315 05:09:05 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:35.573 05:09:05 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:26:35.573 05:09:05 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:35.573 05:09:05 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:35.832 05:09:05 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:26:35.832 05:09:05 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:35.832 05:09:05 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:36.090 05:09:05 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:26:36.090 05:09:05 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:36.090 05:09:05 -- bdev/bdev_raid.sh@489 -- # i=3 00:26:36.090 05:09:05 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:36.349 [2024-04-27 05:09:06.139954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:36.349 [2024-04-27 05:09:06.140260] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:36.349 [2024-04-27 05:09:06.140346] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:26:36.349 [2024-04-27 05:09:06.140650] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:36.349 [2024-04-27 05:09:06.141248] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:36.349 [2024-04-27 05:09:06.141344] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:36.349 [2024-04-27 05:09:06.141487] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:26:36.349 [2024-04-27 05:09:06.141536] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:36.349 [2024-04-27 05:09:06.141568] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:36.349 [2024-04-27 05:09:06.141635] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c980 name raid_bdev1, state configuring 00:26:36.349 [2024-04-27 05:09:06.141741] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:36.349 pt4 00:26:36.349 05:09:06 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:36.349 05:09:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:36.349 05:09:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:36.349 05:09:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:36.349 05:09:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:36.349 05:09:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:36.349 05:09:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:36.349 05:09:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:36.349 05:09:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:36.349 05:09:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:36.349 05:09:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:36.349 05:09:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.607 05:09:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:36.607 "name": "raid_bdev1", 00:26:36.607 "uuid": "dc6a77c5-6312-417d-a777-cd6278f5868c", 00:26:36.607 "strip_size_kb": 64, 00:26:36.607 "state": "configuring", 00:26:36.607 "raid_level": "raid5f", 00:26:36.607 "superblock": true, 00:26:36.607 "num_base_bdevs": 4, 00:26:36.607 "num_base_bdevs_discovered": 1, 00:26:36.607 "num_base_bdevs_operational": 3, 00:26:36.607 "base_bdevs_list": [ 00:26:36.607 { 00:26:36.607 "name": null, 00:26:36.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:36.607 "is_configured": false, 00:26:36.607 "data_offset": 2048, 00:26:36.607 "data_size": 63488 00:26:36.607 }, 00:26:36.607 { 00:26:36.607 "name": null, 00:26:36.607 "uuid": "a7109a6d-eab7-578e-9376-552d7fb3f615", 00:26:36.607 "is_configured": false, 00:26:36.607 "data_offset": 2048, 00:26:36.607 "data_size": 63488 00:26:36.607 }, 00:26:36.607 { 00:26:36.607 "name": null, 00:26:36.607 "uuid": "47b43ad2-7834-56a9-8a06-68a6ef47a5b3", 00:26:36.607 "is_configured": false, 00:26:36.607 "data_offset": 2048, 00:26:36.607 "data_size": 63488 00:26:36.607 }, 00:26:36.607 { 00:26:36.607 "name": "pt4", 00:26:36.607 "uuid": "f26d9a4d-4a49-5e2f-a754-24f619caf1ac", 00:26:36.607 "is_configured": true, 00:26:36.607 "data_offset": 2048, 00:26:36.607 "data_size": 63488 00:26:36.607 } 00:26:36.607 ] 00:26:36.607 }' 00:26:36.607 05:09:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:36.607 05:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:37.173 05:09:07 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:26:37.173 05:09:07 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:26:37.173 05:09:07 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:37.431 [2024-04-27 05:09:07.292239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:37.431 [2024-04-27 05:09:07.292571] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.431 [2024-04-27 05:09:07.292783] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d280 00:26:37.431 [2024-04-27 
05:09:07.292927] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.431 [2024-04-27 05:09:07.293518] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.431 [2024-04-27 05:09:07.293695] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:37.431 [2024-04-27 05:09:07.293905] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:37.431 [2024-04-27 05:09:07.294068] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:37.431 pt2 00:26:37.431 05:09:07 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:26:37.431 05:09:07 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:26:37.431 05:09:07 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:37.689 [2024-04-27 05:09:07.560362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:37.689 [2024-04-27 05:09:07.560739] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.689 [2024-04-27 05:09:07.560829] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:26:37.689 [2024-04-27 05:09:07.561071] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.689 [2024-04-27 05:09:07.561675] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.690 [2024-04-27 05:09:07.561852] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:37.690 [2024-04-27 05:09:07.562070] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:26:37.690 [2024-04-27 05:09:07.562209] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:37.690 [2024-04-27 05:09:07.562435] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000cf80 00:26:37.690 [2024-04-27 05:09:07.562550] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:37.690 [2024-04-27 05:09:07.562764] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:26:37.690 [2024-04-27 05:09:07.563824] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000cf80 00:26:37.690 [2024-04-27 05:09:07.563955] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000cf80 00:26:37.690 [2024-04-27 05:09:07.564298] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:37.690 pt3 00:26:37.690 05:09:07 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:26:37.690 05:09:07 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:26:37.690 05:09:07 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:37.690 05:09:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:37.690 05:09:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:37.690 05:09:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:37.690 05:09:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:37.690 05:09:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:37.690 05:09:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:37.690 05:09:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:37.690 05:09:07 -- bdev/bdev_raid.sh@124 -- 
# local num_base_bdevs_discovered 00:26:37.690 05:09:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:37.690 05:09:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.690 05:09:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.947 05:09:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:37.947 "name": "raid_bdev1", 00:26:37.947 "uuid": "dc6a77c5-6312-417d-a777-cd6278f5868c", 00:26:37.947 "strip_size_kb": 64, 00:26:37.947 "state": "online", 00:26:37.947 "raid_level": "raid5f", 00:26:37.947 "superblock": true, 00:26:37.947 "num_base_bdevs": 4, 00:26:37.947 "num_base_bdevs_discovered": 3, 00:26:37.947 "num_base_bdevs_operational": 3, 00:26:37.947 "base_bdevs_list": [ 00:26:37.947 { 00:26:37.948 "name": null, 00:26:37.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:37.948 "is_configured": false, 00:26:37.948 "data_offset": 2048, 00:26:37.948 "data_size": 63488 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "name": "pt2", 00:26:37.948 "uuid": "a7109a6d-eab7-578e-9376-552d7fb3f615", 00:26:37.948 "is_configured": true, 00:26:37.948 "data_offset": 2048, 00:26:37.948 "data_size": 63488 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "name": "pt3", 00:26:37.948 "uuid": "47b43ad2-7834-56a9-8a06-68a6ef47a5b3", 00:26:37.948 "is_configured": true, 00:26:37.948 "data_offset": 2048, 00:26:37.948 "data_size": 63488 00:26:37.948 }, 00:26:37.948 { 00:26:37.948 "name": "pt4", 00:26:37.948 "uuid": "f26d9a4d-4a49-5e2f-a754-24f619caf1ac", 00:26:37.948 "is_configured": true, 00:26:37.948 "data_offset": 2048, 00:26:37.948 "data_size": 63488 00:26:37.948 } 00:26:37.948 ] 00:26:37.948 }' 00:26:37.948 05:09:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:37.948 05:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:38.883 05:09:08 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:38.883 05:09:08 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:26:38.883 [2024-04-27 05:09:08.717373] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:38.883 05:09:08 -- bdev/bdev_raid.sh@506 -- # '[' dc6a77c5-6312-417d-a777-cd6278f5868c '!=' dc6a77c5-6312-417d-a777-cd6278f5868c ']' 00:26:38.883 05:09:08 -- bdev/bdev_raid.sh@511 -- # killprocess 142769 00:26:38.883 05:09:08 -- common/autotest_common.sh@926 -- # '[' -z 142769 ']' 00:26:38.883 05:09:08 -- common/autotest_common.sh@930 -- # kill -0 142769 00:26:38.883 05:09:08 -- common/autotest_common.sh@931 -- # uname 00:26:38.883 05:09:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:38.883 05:09:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142769 00:26:38.883 05:09:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:38.883 05:09:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:38.883 05:09:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142769' 00:26:38.883 killing process with pid 142769 00:26:38.883 05:09:08 -- common/autotest_common.sh@945 -- # kill 142769 00:26:38.883 [2024-04-27 05:09:08.761906] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:38.883 05:09:08 -- common/autotest_common.sh@950 -- # wait 142769 00:26:38.883 [2024-04-27 05:09:08.762165] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:38.883 [2024-04-27 05:09:08.762387] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:38.883 [2024-04-27 05:09:08.762551] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state offline 00:26:39.141 [2024-04-27 05:09:08.840015] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:39.400 ************************************ 00:26:39.400 END TEST raid5f_superblock_test 00:26:39.400 ************************************ 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@513 -- # return 0 00:26:39.400 00:26:39.400 real 0m22.770s 00:26:39.400 user 0m42.226s 00:26:39.400 sys 0m3.203s 00:26:39.400 05:09:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:39.400 05:09:09 -- common/autotest_common.sh@10 -- # set +x 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:26:39.400 05:09:09 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:26:39.400 05:09:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:39.400 05:09:09 -- common/autotest_common.sh@10 -- # set +x 00:26:39.400 ************************************ 00:26:39.400 START TEST raid5f_rebuild_test 00:26:39.400 ************************************ 00:26:39.400 05:09:09 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 false false 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@534 -- # 
create_arg+=' -z 64' 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@544 -- # raid_pid=143451 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@545 -- # waitforlisten 143451 /var/tmp/spdk-raid.sock 00:26:39.400 05:09:09 -- common/autotest_common.sh@819 -- # '[' -z 143451 ']' 00:26:39.400 05:09:09 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:39.400 05:09:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:39.400 05:09:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:39.400 05:09:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:39.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:39.400 05:09:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:39.400 05:09:09 -- common/autotest_common.sh@10 -- # set +x 00:26:39.400 [2024-04-27 05:09:09.313874] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:39.400 [2024-04-27 05:09:09.314104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143451 ] 00:26:39.400 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:39.400 Zero copy mechanism will not be used. 00:26:39.659 [2024-04-27 05:09:09.476277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.918 [2024-04-27 05:09:09.600640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.918 [2024-04-27 05:09:09.681477] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:40.485 05:09:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:40.485 05:09:10 -- common/autotest_common.sh@852 -- # return 0 00:26:40.485 05:09:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:40.485 05:09:10 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:40.485 05:09:10 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:40.744 BaseBdev1 00:26:40.744 05:09:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:40.744 05:09:10 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:40.744 05:09:10 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:41.003 BaseBdev2 00:26:41.003 05:09:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:41.003 05:09:10 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:41.003 05:09:10 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:41.262 BaseBdev3 00:26:41.262 05:09:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:41.262 05:09:11 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:41.262 05:09:11 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:41.522 BaseBdev4 00:26:41.522 05:09:11 -- bdev/bdev_raid.sh@558 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:41.781 spare_malloc 00:26:41.781 05:09:11 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:42.039 spare_delay 00:26:42.040 05:09:11 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:42.304 [2024-04-27 05:09:11.994444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:42.304 [2024-04-27 05:09:11.994668] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:42.304 [2024-04-27 05:09:11.994731] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:26:42.304 [2024-04-27 05:09:11.994798] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:42.304 [2024-04-27 05:09:11.998304] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:42.304 [2024-04-27 05:09:11.998396] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:42.304 spare 00:26:42.304 05:09:12 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:26:42.304 [2024-04-27 05:09:12.210813] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:42.304 [2024-04-27 05:09:12.213403] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:42.304 [2024-04-27 05:09:12.213462] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:42.304 [2024-04-27 05:09:12.213506] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:42.304 [2024-04-27 05:09:12.213612] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:26:42.304 [2024-04-27 05:09:12.213627] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:26:42.304 [2024-04-27 05:09:12.213819] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:26:42.304 [2024-04-27 05:09:12.214810] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:26:42.304 [2024-04-27 05:09:12.214834] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:26:42.304 [2024-04-27 05:09:12.215145] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:42.574 05:09:12 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:42.574 05:09:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:42.574 05:09:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:42.574 05:09:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:42.574 05:09:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:42.574 05:09:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:42.574 05:09:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:42.574 05:09:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:42.574 05:09:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:42.574 05:09:12 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:26:42.574 05:09:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.574 05:09:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:42.845 05:09:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:42.845 "name": "raid_bdev1", 00:26:42.845 "uuid": "c33d9a5b-4cba-4a1c-895d-d095c1537b44", 00:26:42.845 "strip_size_kb": 64, 00:26:42.845 "state": "online", 00:26:42.845 "raid_level": "raid5f", 00:26:42.845 "superblock": false, 00:26:42.845 "num_base_bdevs": 4, 00:26:42.845 "num_base_bdevs_discovered": 4, 00:26:42.845 "num_base_bdevs_operational": 4, 00:26:42.845 "base_bdevs_list": [ 00:26:42.845 { 00:26:42.845 "name": "BaseBdev1", 00:26:42.845 "uuid": "63f34d87-4fac-4cf2-86b5-ab59f486a6b7", 00:26:42.845 "is_configured": true, 00:26:42.845 "data_offset": 0, 00:26:42.845 "data_size": 65536 00:26:42.845 }, 00:26:42.845 { 00:26:42.845 "name": "BaseBdev2", 00:26:42.845 "uuid": "aa9d83de-58d3-44dd-950a-be4753db869a", 00:26:42.845 "is_configured": true, 00:26:42.845 "data_offset": 0, 00:26:42.845 "data_size": 65536 00:26:42.845 }, 00:26:42.845 { 00:26:42.845 "name": "BaseBdev3", 00:26:42.845 "uuid": "1826a426-d014-4b69-84df-f1061db08bc2", 00:26:42.845 "is_configured": true, 00:26:42.845 "data_offset": 0, 00:26:42.845 "data_size": 65536 00:26:42.845 }, 00:26:42.845 { 00:26:42.845 "name": "BaseBdev4", 00:26:42.845 "uuid": "3b9110f9-f76c-4084-94f1-285a1bd9d8ed", 00:26:42.845 "is_configured": true, 00:26:42.845 "data_offset": 0, 00:26:42.845 "data_size": 65536 00:26:42.845 } 00:26:42.845 ] 00:26:42.845 }' 00:26:42.845 05:09:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:42.845 05:09:12 -- common/autotest_common.sh@10 -- # set +x 00:26:43.414 05:09:13 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:43.414 05:09:13 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:26:43.672 [2024-04-27 05:09:13.347693] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:43.672 05:09:13 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:26:43.672 05:09:13 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.672 05:09:13 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:43.931 05:09:13 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:26:43.931 05:09:13 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:26:43.931 05:09:13 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:26:43.931 05:09:13 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:43.931 05:09:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:43.931 05:09:13 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:43.931 05:09:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:43.931 05:09:13 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:43.931 05:09:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:43.931 05:09:13 -- bdev/nbd_common.sh@12 -- # local i 00:26:43.931 05:09:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:43.931 05:09:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:43.931 05:09:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:43.931 [2024-04-27 
05:09:13.843716] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:26:44.188 /dev/nbd0 00:26:44.188 05:09:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:44.188 05:09:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:44.188 05:09:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:44.188 05:09:13 -- common/autotest_common.sh@857 -- # local i 00:26:44.188 05:09:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:44.188 05:09:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:44.188 05:09:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:44.188 05:09:13 -- common/autotest_common.sh@861 -- # break 00:26:44.188 05:09:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:44.188 05:09:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:44.188 05:09:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:44.188 1+0 records in 00:26:44.188 1+0 records out 00:26:44.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427469 s, 9.6 MB/s 00:26:44.188 05:09:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:44.188 05:09:13 -- common/autotest_common.sh@874 -- # size=4096 00:26:44.188 05:09:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:44.188 05:09:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:44.188 05:09:13 -- common/autotest_common.sh@877 -- # return 0 00:26:44.188 05:09:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:44.188 05:09:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:44.188 05:09:13 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:26:44.188 05:09:13 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:26:44.188 05:09:13 -- bdev/bdev_raid.sh@582 -- # echo 192 00:26:44.188 05:09:13 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:26:44.755 512+0 records in 00:26:44.755 512+0 records out 00:26:44.755 100663296 bytes (101 MB, 96 MiB) copied, 0.55638 s, 181 MB/s 00:26:44.755 05:09:14 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:44.755 05:09:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:44.755 05:09:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:44.755 05:09:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:44.755 05:09:14 -- bdev/nbd_common.sh@51 -- # local i 00:26:44.755 05:09:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:44.755 05:09:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:45.014 05:09:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:45.014 [2024-04-27 05:09:14.734439] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:45.014 05:09:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:45.014 05:09:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:45.014 05:09:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:45.014 05:09:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:45.014 05:09:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:45.014 05:09:14 -- bdev/nbd_common.sh@41 -- # break 00:26:45.014 05:09:14 -- bdev/nbd_common.sh@45 -- # return 0 00:26:45.014 05:09:14 -- bdev/bdev_raid.sh@591 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:45.276 [2024-04-27 05:09:14.950169] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:45.276 05:09:14 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:45.276 05:09:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:45.276 05:09:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:45.277 05:09:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:45.277 05:09:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:45.277 05:09:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:45.277 05:09:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:45.277 05:09:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:45.277 05:09:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:45.277 05:09:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:45.277 05:09:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:45.277 05:09:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:45.542 05:09:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:45.542 "name": "raid_bdev1", 00:26:45.542 "uuid": "c33d9a5b-4cba-4a1c-895d-d095c1537b44", 00:26:45.542 "strip_size_kb": 64, 00:26:45.542 "state": "online", 00:26:45.542 "raid_level": "raid5f", 00:26:45.542 "superblock": false, 00:26:45.542 "num_base_bdevs": 4, 00:26:45.542 "num_base_bdevs_discovered": 3, 00:26:45.542 "num_base_bdevs_operational": 3, 00:26:45.542 "base_bdevs_list": [ 00:26:45.542 { 00:26:45.542 "name": null, 00:26:45.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:45.542 "is_configured": false, 00:26:45.542 "data_offset": 0, 00:26:45.542 "data_size": 65536 00:26:45.542 }, 00:26:45.542 { 00:26:45.542 "name": "BaseBdev2", 00:26:45.542 "uuid": "aa9d83de-58d3-44dd-950a-be4753db869a", 00:26:45.542 "is_configured": true, 00:26:45.542 "data_offset": 0, 00:26:45.542 "data_size": 65536 00:26:45.542 }, 00:26:45.542 { 00:26:45.542 "name": "BaseBdev3", 00:26:45.542 "uuid": "1826a426-d014-4b69-84df-f1061db08bc2", 00:26:45.542 "is_configured": true, 00:26:45.542 "data_offset": 0, 00:26:45.542 "data_size": 65536 00:26:45.542 }, 00:26:45.542 { 00:26:45.542 "name": "BaseBdev4", 00:26:45.542 "uuid": "3b9110f9-f76c-4084-94f1-285a1bd9d8ed", 00:26:45.542 "is_configured": true, 00:26:45.542 "data_offset": 0, 00:26:45.542 "data_size": 65536 00:26:45.542 } 00:26:45.542 ] 00:26:45.542 }' 00:26:45.542 05:09:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:45.542 05:09:15 -- common/autotest_common.sh@10 -- # set +x 00:26:46.113 05:09:15 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:46.372 [2024-04-27 05:09:16.078464] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:46.372 [2024-04-27 05:09:16.078585] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:46.372 [2024-04-27 05:09:16.085273] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:26:46.372 [2024-04-27 05:09:16.088592] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:46.372 05:09:16 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:26:47.312 05:09:17 -- bdev/bdev_raid.sh@601 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:47.312 05:09:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:47.312 05:09:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:47.312 05:09:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:47.312 05:09:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:47.312 05:09:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.312 05:09:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:47.570 05:09:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:47.570 "name": "raid_bdev1", 00:26:47.570 "uuid": "c33d9a5b-4cba-4a1c-895d-d095c1537b44", 00:26:47.570 "strip_size_kb": 64, 00:26:47.570 "state": "online", 00:26:47.570 "raid_level": "raid5f", 00:26:47.570 "superblock": false, 00:26:47.571 "num_base_bdevs": 4, 00:26:47.571 "num_base_bdevs_discovered": 4, 00:26:47.571 "num_base_bdevs_operational": 4, 00:26:47.571 "process": { 00:26:47.571 "type": "rebuild", 00:26:47.571 "target": "spare", 00:26:47.571 "progress": { 00:26:47.571 "blocks": 23040, 00:26:47.571 "percent": 11 00:26:47.571 } 00:26:47.571 }, 00:26:47.571 "base_bdevs_list": [ 00:26:47.571 { 00:26:47.571 "name": "spare", 00:26:47.571 "uuid": "810b957f-1359-5cea-b9d7-f695611ddc2d", 00:26:47.571 "is_configured": true, 00:26:47.571 "data_offset": 0, 00:26:47.571 "data_size": 65536 00:26:47.571 }, 00:26:47.571 { 00:26:47.571 "name": "BaseBdev2", 00:26:47.571 "uuid": "aa9d83de-58d3-44dd-950a-be4753db869a", 00:26:47.571 "is_configured": true, 00:26:47.571 "data_offset": 0, 00:26:47.571 "data_size": 65536 00:26:47.571 }, 00:26:47.571 { 00:26:47.571 "name": "BaseBdev3", 00:26:47.571 "uuid": "1826a426-d014-4b69-84df-f1061db08bc2", 00:26:47.571 "is_configured": true, 00:26:47.571 "data_offset": 0, 00:26:47.571 "data_size": 65536 00:26:47.571 }, 00:26:47.571 { 00:26:47.571 "name": "BaseBdev4", 00:26:47.571 "uuid": "3b9110f9-f76c-4084-94f1-285a1bd9d8ed", 00:26:47.571 "is_configured": true, 00:26:47.571 "data_offset": 0, 00:26:47.571 "data_size": 65536 00:26:47.571 } 00:26:47.571 ] 00:26:47.571 }' 00:26:47.571 05:09:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:47.571 05:09:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:47.571 05:09:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:47.571 05:09:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:47.571 05:09:17 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:47.829 [2024-04-27 05:09:17.691845] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:47.829 [2024-04-27 05:09:17.709293] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:47.829 [2024-04-27 05:09:17.709475] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:48.087 05:09:17 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:48.088 05:09:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:48.088 05:09:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:48.088 05:09:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:48.088 05:09:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:48.088 05:09:17 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:26:48.088 05:09:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:48.088 05:09:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:48.088 05:09:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:48.088 05:09:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:48.088 05:09:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.088 05:09:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.088 05:09:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:48.088 "name": "raid_bdev1", 00:26:48.088 "uuid": "c33d9a5b-4cba-4a1c-895d-d095c1537b44", 00:26:48.088 "strip_size_kb": 64, 00:26:48.088 "state": "online", 00:26:48.088 "raid_level": "raid5f", 00:26:48.088 "superblock": false, 00:26:48.088 "num_base_bdevs": 4, 00:26:48.088 "num_base_bdevs_discovered": 3, 00:26:48.088 "num_base_bdevs_operational": 3, 00:26:48.088 "base_bdevs_list": [ 00:26:48.088 { 00:26:48.088 "name": null, 00:26:48.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.088 "is_configured": false, 00:26:48.088 "data_offset": 0, 00:26:48.088 "data_size": 65536 00:26:48.088 }, 00:26:48.088 { 00:26:48.088 "name": "BaseBdev2", 00:26:48.088 "uuid": "aa9d83de-58d3-44dd-950a-be4753db869a", 00:26:48.088 "is_configured": true, 00:26:48.088 "data_offset": 0, 00:26:48.088 "data_size": 65536 00:26:48.088 }, 00:26:48.088 { 00:26:48.088 "name": "BaseBdev3", 00:26:48.088 "uuid": "1826a426-d014-4b69-84df-f1061db08bc2", 00:26:48.088 "is_configured": true, 00:26:48.088 "data_offset": 0, 00:26:48.088 "data_size": 65536 00:26:48.088 }, 00:26:48.088 { 00:26:48.088 "name": "BaseBdev4", 00:26:48.088 "uuid": "3b9110f9-f76c-4084-94f1-285a1bd9d8ed", 00:26:48.088 "is_configured": true, 00:26:48.088 "data_offset": 0, 00:26:48.088 "data_size": 65536 00:26:48.088 } 00:26:48.088 ] 00:26:48.088 }' 00:26:48.088 05:09:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:48.088 05:09:17 -- common/autotest_common.sh@10 -- # set +x 00:26:49.023 05:09:18 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:49.023 05:09:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:49.023 05:09:18 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:49.023 05:09:18 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:49.023 05:09:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:49.023 05:09:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.023 05:09:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.281 05:09:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:49.281 "name": "raid_bdev1", 00:26:49.281 "uuid": "c33d9a5b-4cba-4a1c-895d-d095c1537b44", 00:26:49.281 "strip_size_kb": 64, 00:26:49.281 "state": "online", 00:26:49.281 "raid_level": "raid5f", 00:26:49.281 "superblock": false, 00:26:49.281 "num_base_bdevs": 4, 00:26:49.281 "num_base_bdevs_discovered": 3, 00:26:49.281 "num_base_bdevs_operational": 3, 00:26:49.281 "base_bdevs_list": [ 00:26:49.281 { 00:26:49.281 "name": null, 00:26:49.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.281 "is_configured": false, 00:26:49.281 "data_offset": 0, 00:26:49.281 "data_size": 65536 00:26:49.281 }, 00:26:49.281 { 00:26:49.281 "name": "BaseBdev2", 00:26:49.281 "uuid": "aa9d83de-58d3-44dd-950a-be4753db869a", 00:26:49.281 "is_configured": true, 
00:26:49.281 "data_offset": 0, 00:26:49.281 "data_size": 65536 00:26:49.281 }, 00:26:49.281 { 00:26:49.281 "name": "BaseBdev3", 00:26:49.281 "uuid": "1826a426-d014-4b69-84df-f1061db08bc2", 00:26:49.281 "is_configured": true, 00:26:49.281 "data_offset": 0, 00:26:49.281 "data_size": 65536 00:26:49.281 }, 00:26:49.281 { 00:26:49.281 "name": "BaseBdev4", 00:26:49.281 "uuid": "3b9110f9-f76c-4084-94f1-285a1bd9d8ed", 00:26:49.281 "is_configured": true, 00:26:49.281 "data_offset": 0, 00:26:49.281 "data_size": 65536 00:26:49.281 } 00:26:49.281 ] 00:26:49.281 }' 00:26:49.281 05:09:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:49.281 05:09:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:49.281 05:09:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:49.281 05:09:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:49.281 05:09:19 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:49.539 [2024-04-27 05:09:19.396895] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:49.539 [2024-04-27 05:09:19.396973] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:49.539 [2024-04-27 05:09:19.403864] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:26:49.539 [2024-04-27 05:09:19.406855] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:49.539 05:09:19 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:26:50.919 05:09:20 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:50.920 05:09:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:50.920 05:09:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:50.920 05:09:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:50.920 05:09:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:50.920 05:09:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.920 05:09:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.920 05:09:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:50.920 "name": "raid_bdev1", 00:26:50.920 "uuid": "c33d9a5b-4cba-4a1c-895d-d095c1537b44", 00:26:50.920 "strip_size_kb": 64, 00:26:50.920 "state": "online", 00:26:50.920 "raid_level": "raid5f", 00:26:50.920 "superblock": false, 00:26:50.920 "num_base_bdevs": 4, 00:26:50.920 "num_base_bdevs_discovered": 4, 00:26:50.920 "num_base_bdevs_operational": 4, 00:26:50.920 "process": { 00:26:50.920 "type": "rebuild", 00:26:50.920 "target": "spare", 00:26:50.920 "progress": { 00:26:50.920 "blocks": 23040, 00:26:50.920 "percent": 11 00:26:50.920 } 00:26:50.920 }, 00:26:50.920 "base_bdevs_list": [ 00:26:50.920 { 00:26:50.920 "name": "spare", 00:26:50.920 "uuid": "810b957f-1359-5cea-b9d7-f695611ddc2d", 00:26:50.920 "is_configured": true, 00:26:50.920 "data_offset": 0, 00:26:50.920 "data_size": 65536 00:26:50.920 }, 00:26:50.920 { 00:26:50.920 "name": "BaseBdev2", 00:26:50.920 "uuid": "aa9d83de-58d3-44dd-950a-be4753db869a", 00:26:50.920 "is_configured": true, 00:26:50.920 "data_offset": 0, 00:26:50.920 "data_size": 65536 00:26:50.920 }, 00:26:50.920 { 00:26:50.920 "name": "BaseBdev3", 00:26:50.920 "uuid": "1826a426-d014-4b69-84df-f1061db08bc2", 00:26:50.920 "is_configured": true, 00:26:50.920 "data_offset": 0, 
00:26:50.920 "data_size": 65536 00:26:50.920 }, 00:26:50.920 { 00:26:50.920 "name": "BaseBdev4", 00:26:50.920 "uuid": "3b9110f9-f76c-4084-94f1-285a1bd9d8ed", 00:26:50.920 "is_configured": true, 00:26:50.920 "data_offset": 0, 00:26:50.920 "data_size": 65536 00:26:50.920 } 00:26:50.920 ] 00:26:50.920 }' 00:26:50.920 05:09:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:50.920 05:09:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:50.920 05:09:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:51.178 05:09:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:51.178 05:09:20 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:26:51.178 05:09:20 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:26:51.178 05:09:20 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:26:51.178 05:09:20 -- bdev/bdev_raid.sh@657 -- # local timeout=726 00:26:51.178 05:09:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:51.178 05:09:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:51.178 05:09:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:51.178 05:09:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:51.178 05:09:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:51.178 05:09:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:51.178 05:09:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.178 05:09:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.178 05:09:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:51.178 "name": "raid_bdev1", 00:26:51.178 "uuid": "c33d9a5b-4cba-4a1c-895d-d095c1537b44", 00:26:51.178 "strip_size_kb": 64, 00:26:51.178 "state": "online", 00:26:51.178 "raid_level": "raid5f", 00:26:51.178 "superblock": false, 00:26:51.178 "num_base_bdevs": 4, 00:26:51.178 "num_base_bdevs_discovered": 4, 00:26:51.178 "num_base_bdevs_operational": 4, 00:26:51.178 "process": { 00:26:51.178 "type": "rebuild", 00:26:51.179 "target": "spare", 00:26:51.179 "progress": { 00:26:51.179 "blocks": 30720, 00:26:51.179 "percent": 15 00:26:51.179 } 00:26:51.179 }, 00:26:51.179 "base_bdevs_list": [ 00:26:51.179 { 00:26:51.179 "name": "spare", 00:26:51.179 "uuid": "810b957f-1359-5cea-b9d7-f695611ddc2d", 00:26:51.179 "is_configured": true, 00:26:51.179 "data_offset": 0, 00:26:51.179 "data_size": 65536 00:26:51.179 }, 00:26:51.179 { 00:26:51.179 "name": "BaseBdev2", 00:26:51.179 "uuid": "aa9d83de-58d3-44dd-950a-be4753db869a", 00:26:51.179 "is_configured": true, 00:26:51.179 "data_offset": 0, 00:26:51.179 "data_size": 65536 00:26:51.179 }, 00:26:51.179 { 00:26:51.179 "name": "BaseBdev3", 00:26:51.179 "uuid": "1826a426-d014-4b69-84df-f1061db08bc2", 00:26:51.179 "is_configured": true, 00:26:51.179 "data_offset": 0, 00:26:51.179 "data_size": 65536 00:26:51.179 }, 00:26:51.179 { 00:26:51.179 "name": "BaseBdev4", 00:26:51.179 "uuid": "3b9110f9-f76c-4084-94f1-285a1bd9d8ed", 00:26:51.179 "is_configured": true, 00:26:51.179 "data_offset": 0, 00:26:51.179 "data_size": 65536 00:26:51.179 } 00:26:51.179 ] 00:26:51.179 }' 00:26:51.179 05:09:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:51.437 05:09:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:51.437 05:09:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:51.437 05:09:21 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:51.437 05:09:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:52.371 05:09:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:52.371 05:09:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:52.371 05:09:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:52.371 05:09:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:52.371 05:09:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:52.371 05:09:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:52.371 05:09:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.371 05:09:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.629 05:09:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:52.629 "name": "raid_bdev1", 00:26:52.629 "uuid": "c33d9a5b-4cba-4a1c-895d-d095c1537b44", 00:26:52.629 "strip_size_kb": 64, 00:26:52.629 "state": "online", 00:26:52.629 "raid_level": "raid5f", 00:26:52.629 "superblock": false, 00:26:52.629 "num_base_bdevs": 4, 00:26:52.629 "num_base_bdevs_discovered": 4, 00:26:52.629 "num_base_bdevs_operational": 4, 00:26:52.629 "process": { 00:26:52.629 "type": "rebuild", 00:26:52.629 "target": "spare", 00:26:52.629 "progress": { 00:26:52.629 "blocks": 55680, 00:26:52.629 "percent": 28 00:26:52.629 } 00:26:52.629 }, 00:26:52.629 "base_bdevs_list": [ 00:26:52.629 { 00:26:52.629 "name": "spare", 00:26:52.629 "uuid": "810b957f-1359-5cea-b9d7-f695611ddc2d", 00:26:52.629 "is_configured": true, 00:26:52.629 "data_offset": 0, 00:26:52.629 "data_size": 65536 00:26:52.629 }, 00:26:52.629 { 00:26:52.629 "name": "BaseBdev2", 00:26:52.629 "uuid": "aa9d83de-58d3-44dd-950a-be4753db869a", 00:26:52.629 "is_configured": true, 00:26:52.629 "data_offset": 0, 00:26:52.629 "data_size": 65536 00:26:52.629 }, 00:26:52.629 { 00:26:52.629 "name": "BaseBdev3", 00:26:52.629 "uuid": "1826a426-d014-4b69-84df-f1061db08bc2", 00:26:52.629 "is_configured": true, 00:26:52.629 "data_offset": 0, 00:26:52.629 "data_size": 65536 00:26:52.629 }, 00:26:52.629 { 00:26:52.629 "name": "BaseBdev4", 00:26:52.629 "uuid": "3b9110f9-f76c-4084-94f1-285a1bd9d8ed", 00:26:52.629 "is_configured": true, 00:26:52.629 "data_offset": 0, 00:26:52.629 "data_size": 65536 00:26:52.629 } 00:26:52.629 ] 00:26:52.629 }' 00:26:52.629 05:09:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:52.629 05:09:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:52.629 05:09:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:52.629 05:09:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:52.629 05:09:22 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:54.007 05:09:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:54.007 05:09:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:54.007 05:09:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:54.007 05:09:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:54.007 05:09:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:54.007 05:09:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:54.007 05:09:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.007 05:09:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:26:54.007 05:09:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:54.007 "name": "raid_bdev1", 00:26:54.007 "uuid": "c33d9a5b-4cba-4a1c-895d-d095c1537b44", 00:26:54.007 "strip_size_kb": 64, 00:26:54.007 "state": "online", 00:26:54.007 "raid_level": "raid5f", 00:26:54.007 "superblock": false, 00:26:54.007 "num_base_bdevs": 4, 00:26:54.007 "num_base_bdevs_discovered": 4, 00:26:54.007 "num_base_bdevs_operational": 4, 00:26:54.007 "process": { 00:26:54.007 "type": "rebuild", 00:26:54.007 "target": "spare", 00:26:54.007 "progress": { 00:26:54.007 "blocks": 82560, 00:26:54.007 "percent": 41 00:26:54.007 } 00:26:54.007 }, 00:26:54.007 "base_bdevs_list": [ 00:26:54.007 { 00:26:54.007 "name": "spare", 00:26:54.007 "uuid": "810b957f-1359-5cea-b9d7-f695611ddc2d", 00:26:54.007 "is_configured": true, 00:26:54.007 "data_offset": 0, 00:26:54.007 "data_size": 65536 00:26:54.007 }, 00:26:54.007 { 00:26:54.007 "name": "BaseBdev2", 00:26:54.007 "uuid": "aa9d83de-58d3-44dd-950a-be4753db869a", 00:26:54.007 "is_configured": true, 00:26:54.007 "data_offset": 0, 00:26:54.007 "data_size": 65536 00:26:54.007 }, 00:26:54.007 { 00:26:54.007 "name": "BaseBdev3", 00:26:54.007 "uuid": "1826a426-d014-4b69-84df-f1061db08bc2", 00:26:54.007 "is_configured": true, 00:26:54.007 "data_offset": 0, 00:26:54.007 "data_size": 65536 00:26:54.007 }, 00:26:54.007 { 00:26:54.007 "name": "BaseBdev4", 00:26:54.007 "uuid": "3b9110f9-f76c-4084-94f1-285a1bd9d8ed", 00:26:54.007 "is_configured": true, 00:26:54.007 "data_offset": 0, 00:26:54.007 "data_size": 65536 00:26:54.007 } 00:26:54.007 ] 00:26:54.007 }' 00:26:54.007 05:09:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:54.007 05:09:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:54.007 05:09:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:54.007 05:09:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:54.007 05:09:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:55.405 05:09:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:55.405 05:09:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:55.405 05:09:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:55.405 05:09:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:55.405 05:09:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:55.405 05:09:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:55.405 05:09:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.405 05:09:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.405 05:09:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:55.405 "name": "raid_bdev1", 00:26:55.405 "uuid": "c33d9a5b-4cba-4a1c-895d-d095c1537b44", 00:26:55.405 "strip_size_kb": 64, 00:26:55.405 "state": "online", 00:26:55.405 "raid_level": "raid5f", 00:26:55.405 "superblock": false, 00:26:55.405 "num_base_bdevs": 4, 00:26:55.405 "num_base_bdevs_discovered": 4, 00:26:55.405 "num_base_bdevs_operational": 4, 00:26:55.405 "process": { 00:26:55.405 "type": "rebuild", 00:26:55.405 "target": "spare", 00:26:55.405 "progress": { 00:26:55.405 "blocks": 107520, 00:26:55.405 "percent": 54 00:26:55.405 } 00:26:55.405 }, 00:26:55.405 "base_bdevs_list": [ 00:26:55.405 { 00:26:55.405 "name": "spare", 00:26:55.405 "uuid": "810b957f-1359-5cea-b9d7-f695611ddc2d", 00:26:55.405 "is_configured": true, 00:26:55.405 "data_offset": 0, 
00:26:55.405 "data_size": 65536 00:26:55.405 }, 00:26:55.405 { 00:26:55.405 "name": "BaseBdev2", 00:26:55.405 "uuid": "aa9d83de-58d3-44dd-950a-be4753db869a", 00:26:55.405 "is_configured": true, 00:26:55.405 "data_offset": 0, 00:26:55.405 "data_size": 65536 00:26:55.405 }, 00:26:55.405 { 00:26:55.405 "name": "BaseBdev3", 00:26:55.405 "uuid": "1826a426-d014-4b69-84df-f1061db08bc2", 00:26:55.405 "is_configured": true, 00:26:55.405 "data_offset": 0, 00:26:55.405 "data_size": 65536 00:26:55.405 }, 00:26:55.405 { 00:26:55.405 "name": "BaseBdev4", 00:26:55.405 "uuid": "3b9110f9-f76c-4084-94f1-285a1bd9d8ed", 00:26:55.405 "is_configured": true, 00:26:55.405 "data_offset": 0, 00:26:55.405 "data_size": 65536 00:26:55.405 } 00:26:55.405 ] 00:26:55.405 }' 00:26:55.405 05:09:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:55.405 05:09:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:55.405 05:09:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:55.405 05:09:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:55.405 05:09:25 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:56.780 05:09:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:56.780 05:09:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:56.780 05:09:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:56.780 05:09:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:56.780 05:09:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:56.780 05:09:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:56.780 05:09:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:56.780 05:09:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:56.780 05:09:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:56.780 "name": "raid_bdev1", 00:26:56.780 "uuid": "c33d9a5b-4cba-4a1c-895d-d095c1537b44", 00:26:56.780 "strip_size_kb": 64, 00:26:56.780 "state": "online", 00:26:56.780 "raid_level": "raid5f", 00:26:56.780 "superblock": false, 00:26:56.780 "num_base_bdevs": 4, 00:26:56.780 "num_base_bdevs_discovered": 4, 00:26:56.780 "num_base_bdevs_operational": 4, 00:26:56.780 "process": { 00:26:56.780 "type": "rebuild", 00:26:56.780 "target": "spare", 00:26:56.780 "progress": { 00:26:56.780 "blocks": 134400, 00:26:56.780 "percent": 68 00:26:56.780 } 00:26:56.780 }, 00:26:56.780 "base_bdevs_list": [ 00:26:56.780 { 00:26:56.780 "name": "spare", 00:26:56.780 "uuid": "810b957f-1359-5cea-b9d7-f695611ddc2d", 00:26:56.780 "is_configured": true, 00:26:56.780 "data_offset": 0, 00:26:56.780 "data_size": 65536 00:26:56.780 }, 00:26:56.780 { 00:26:56.780 "name": "BaseBdev2", 00:26:56.780 "uuid": "aa9d83de-58d3-44dd-950a-be4753db869a", 00:26:56.780 "is_configured": true, 00:26:56.780 "data_offset": 0, 00:26:56.780 "data_size": 65536 00:26:56.780 }, 00:26:56.780 { 00:26:56.780 "name": "BaseBdev3", 00:26:56.780 "uuid": "1826a426-d014-4b69-84df-f1061db08bc2", 00:26:56.780 "is_configured": true, 00:26:56.780 "data_offset": 0, 00:26:56.780 "data_size": 65536 00:26:56.780 }, 00:26:56.780 { 00:26:56.780 "name": "BaseBdev4", 00:26:56.780 "uuid": "3b9110f9-f76c-4084-94f1-285a1bd9d8ed", 00:26:56.780 "is_configured": true, 00:26:56.780 "data_offset": 0, 00:26:56.780 "data_size": 65536 00:26:56.780 } 00:26:56.780 ] 00:26:56.780 }' 00:26:56.780 05:09:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 
00:26:56.780 05:09:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:56.780 05:09:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:56.780 05:09:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:56.780 05:09:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:58.157 05:09:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:58.157 05:09:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:58.157 05:09:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:58.157 05:09:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:58.157 05:09:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:58.157 05:09:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:58.157 05:09:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.157 05:09:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:58.157 05:09:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:58.157 "name": "raid_bdev1", 00:26:58.157 "uuid": "c33d9a5b-4cba-4a1c-895d-d095c1537b44", 00:26:58.157 "strip_size_kb": 64, 00:26:58.157 "state": "online", 00:26:58.157 "raid_level": "raid5f", 00:26:58.157 "superblock": false, 00:26:58.157 "num_base_bdevs": 4, 00:26:58.157 "num_base_bdevs_discovered": 4, 00:26:58.157 "num_base_bdevs_operational": 4, 00:26:58.157 "process": { 00:26:58.157 "type": "rebuild", 00:26:58.157 "target": "spare", 00:26:58.157 "progress": { 00:26:58.157 "blocks": 159360, 00:26:58.157 "percent": 81 00:26:58.157 } 00:26:58.157 }, 00:26:58.157 "base_bdevs_list": [ 00:26:58.157 { 00:26:58.157 "name": "spare", 00:26:58.157 "uuid": "810b957f-1359-5cea-b9d7-f695611ddc2d", 00:26:58.157 "is_configured": true, 00:26:58.158 "data_offset": 0, 00:26:58.158 "data_size": 65536 00:26:58.158 }, 00:26:58.158 { 00:26:58.158 "name": "BaseBdev2", 00:26:58.158 "uuid": "aa9d83de-58d3-44dd-950a-be4753db869a", 00:26:58.158 "is_configured": true, 00:26:58.158 "data_offset": 0, 00:26:58.158 "data_size": 65536 00:26:58.158 }, 00:26:58.158 { 00:26:58.158 "name": "BaseBdev3", 00:26:58.158 "uuid": "1826a426-d014-4b69-84df-f1061db08bc2", 00:26:58.158 "is_configured": true, 00:26:58.158 "data_offset": 0, 00:26:58.158 "data_size": 65536 00:26:58.158 }, 00:26:58.158 { 00:26:58.158 "name": "BaseBdev4", 00:26:58.158 "uuid": "3b9110f9-f76c-4084-94f1-285a1bd9d8ed", 00:26:58.158 "is_configured": true, 00:26:58.158 "data_offset": 0, 00:26:58.158 "data_size": 65536 00:26:58.158 } 00:26:58.158 ] 00:26:58.158 }' 00:26:58.158 05:09:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:58.158 05:09:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:58.158 05:09:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:58.158 05:09:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:58.158 05:09:28 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:59.543 05:09:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:59.543 05:09:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:59.543 05:09:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:59.543 05:09:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:59.543 05:09:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:59.543 05:09:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:59.543 05:09:29 -- bdev/bdev_raid.sh@188 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.543 05:09:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.543 05:09:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:59.543 "name": "raid_bdev1", 00:26:59.543 "uuid": "c33d9a5b-4cba-4a1c-895d-d095c1537b44", 00:26:59.543 "strip_size_kb": 64, 00:26:59.543 "state": "online", 00:26:59.543 "raid_level": "raid5f", 00:26:59.543 "superblock": false, 00:26:59.543 "num_base_bdevs": 4, 00:26:59.543 "num_base_bdevs_discovered": 4, 00:26:59.543 "num_base_bdevs_operational": 4, 00:26:59.543 "process": { 00:26:59.543 "type": "rebuild", 00:26:59.543 "target": "spare", 00:26:59.543 "progress": { 00:26:59.543 "blocks": 186240, 00:26:59.543 "percent": 94 00:26:59.543 } 00:26:59.543 }, 00:26:59.543 "base_bdevs_list": [ 00:26:59.543 { 00:26:59.543 "name": "spare", 00:26:59.543 "uuid": "810b957f-1359-5cea-b9d7-f695611ddc2d", 00:26:59.543 "is_configured": true, 00:26:59.543 "data_offset": 0, 00:26:59.543 "data_size": 65536 00:26:59.543 }, 00:26:59.543 { 00:26:59.543 "name": "BaseBdev2", 00:26:59.543 "uuid": "aa9d83de-58d3-44dd-950a-be4753db869a", 00:26:59.543 "is_configured": true, 00:26:59.543 "data_offset": 0, 00:26:59.543 "data_size": 65536 00:26:59.543 }, 00:26:59.543 { 00:26:59.543 "name": "BaseBdev3", 00:26:59.543 "uuid": "1826a426-d014-4b69-84df-f1061db08bc2", 00:26:59.543 "is_configured": true, 00:26:59.543 "data_offset": 0, 00:26:59.543 "data_size": 65536 00:26:59.543 }, 00:26:59.543 { 00:26:59.543 "name": "BaseBdev4", 00:26:59.543 "uuid": "3b9110f9-f76c-4084-94f1-285a1bd9d8ed", 00:26:59.543 "is_configured": true, 00:26:59.543 "data_offset": 0, 00:26:59.543 "data_size": 65536 00:26:59.543 } 00:26:59.543 ] 00:26:59.543 }' 00:26:59.543 05:09:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:59.543 05:09:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:59.543 05:09:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:59.543 05:09:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:59.543 05:09:29 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:00.109 [2024-04-27 05:09:29.826347] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:00.109 [2024-04-27 05:09:29.826485] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:00.109 [2024-04-27 05:09:29.826597] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:00.676 05:09:30 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:00.676 05:09:30 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:00.676 05:09:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:00.676 05:09:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:00.676 05:09:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:00.676 05:09:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:00.676 05:09:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.676 05:09:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.954 05:09:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:00.954 "name": "raid_bdev1", 00:27:00.954 "uuid": "c33d9a5b-4cba-4a1c-895d-d095c1537b44", 00:27:00.954 "strip_size_kb": 64, 00:27:00.954 "state": "online", 00:27:00.954 "raid_level": 
"raid5f", 00:27:00.954 "superblock": false, 00:27:00.954 "num_base_bdevs": 4, 00:27:00.954 "num_base_bdevs_discovered": 4, 00:27:00.954 "num_base_bdevs_operational": 4, 00:27:00.954 "base_bdevs_list": [ 00:27:00.954 { 00:27:00.954 "name": "spare", 00:27:00.954 "uuid": "810b957f-1359-5cea-b9d7-f695611ddc2d", 00:27:00.954 "is_configured": true, 00:27:00.954 "data_offset": 0, 00:27:00.954 "data_size": 65536 00:27:00.954 }, 00:27:00.954 { 00:27:00.954 "name": "BaseBdev2", 00:27:00.954 "uuid": "aa9d83de-58d3-44dd-950a-be4753db869a", 00:27:00.954 "is_configured": true, 00:27:00.954 "data_offset": 0, 00:27:00.954 "data_size": 65536 00:27:00.954 }, 00:27:00.954 { 00:27:00.954 "name": "BaseBdev3", 00:27:00.954 "uuid": "1826a426-d014-4b69-84df-f1061db08bc2", 00:27:00.954 "is_configured": true, 00:27:00.954 "data_offset": 0, 00:27:00.954 "data_size": 65536 00:27:00.954 }, 00:27:00.954 { 00:27:00.954 "name": "BaseBdev4", 00:27:00.954 "uuid": "3b9110f9-f76c-4084-94f1-285a1bd9d8ed", 00:27:00.954 "is_configured": true, 00:27:00.954 "data_offset": 0, 00:27:00.954 "data_size": 65536 00:27:00.954 } 00:27:00.954 ] 00:27:00.954 }' 00:27:00.954 05:09:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:00.954 05:09:30 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:00.954 05:09:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:00.954 05:09:30 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:27:00.954 05:09:30 -- bdev/bdev_raid.sh@660 -- # break 00:27:00.954 05:09:30 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:00.954 05:09:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:00.954 05:09:30 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:00.954 05:09:30 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:00.954 05:09:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:00.954 05:09:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.954 05:09:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.213 05:09:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:01.213 "name": "raid_bdev1", 00:27:01.213 "uuid": "c33d9a5b-4cba-4a1c-895d-d095c1537b44", 00:27:01.213 "strip_size_kb": 64, 00:27:01.213 "state": "online", 00:27:01.213 "raid_level": "raid5f", 00:27:01.213 "superblock": false, 00:27:01.213 "num_base_bdevs": 4, 00:27:01.213 "num_base_bdevs_discovered": 4, 00:27:01.213 "num_base_bdevs_operational": 4, 00:27:01.213 "base_bdevs_list": [ 00:27:01.213 { 00:27:01.213 "name": "spare", 00:27:01.213 "uuid": "810b957f-1359-5cea-b9d7-f695611ddc2d", 00:27:01.213 "is_configured": true, 00:27:01.213 "data_offset": 0, 00:27:01.213 "data_size": 65536 00:27:01.213 }, 00:27:01.213 { 00:27:01.213 "name": "BaseBdev2", 00:27:01.213 "uuid": "aa9d83de-58d3-44dd-950a-be4753db869a", 00:27:01.213 "is_configured": true, 00:27:01.213 "data_offset": 0, 00:27:01.213 "data_size": 65536 00:27:01.213 }, 00:27:01.213 { 00:27:01.213 "name": "BaseBdev3", 00:27:01.213 "uuid": "1826a426-d014-4b69-84df-f1061db08bc2", 00:27:01.213 "is_configured": true, 00:27:01.213 "data_offset": 0, 00:27:01.213 "data_size": 65536 00:27:01.213 }, 00:27:01.213 { 00:27:01.213 "name": "BaseBdev4", 00:27:01.213 "uuid": "3b9110f9-f76c-4084-94f1-285a1bd9d8ed", 00:27:01.213 "is_configured": true, 00:27:01.213 "data_offset": 0, 00:27:01.213 "data_size": 65536 00:27:01.213 } 00:27:01.213 ] 00:27:01.213 }' 
00:27:01.213 05:09:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:01.213 05:09:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:01.213 05:09:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:01.471 05:09:31 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:01.471 05:09:31 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:27:01.471 05:09:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:01.471 05:09:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:01.471 05:09:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:01.471 05:09:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:01.471 05:09:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:01.471 05:09:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:01.471 05:09:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:01.471 05:09:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:01.471 05:09:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:01.471 05:09:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.471 05:09:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.729 05:09:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:01.729 "name": "raid_bdev1", 00:27:01.729 "uuid": "c33d9a5b-4cba-4a1c-895d-d095c1537b44", 00:27:01.729 "strip_size_kb": 64, 00:27:01.729 "state": "online", 00:27:01.729 "raid_level": "raid5f", 00:27:01.729 "superblock": false, 00:27:01.729 "num_base_bdevs": 4, 00:27:01.730 "num_base_bdevs_discovered": 4, 00:27:01.730 "num_base_bdevs_operational": 4, 00:27:01.730 "base_bdevs_list": [ 00:27:01.730 { 00:27:01.730 "name": "spare", 00:27:01.730 "uuid": "810b957f-1359-5cea-b9d7-f695611ddc2d", 00:27:01.730 "is_configured": true, 00:27:01.730 "data_offset": 0, 00:27:01.730 "data_size": 65536 00:27:01.730 }, 00:27:01.730 { 00:27:01.730 "name": "BaseBdev2", 00:27:01.730 "uuid": "aa9d83de-58d3-44dd-950a-be4753db869a", 00:27:01.730 "is_configured": true, 00:27:01.730 "data_offset": 0, 00:27:01.730 "data_size": 65536 00:27:01.730 }, 00:27:01.730 { 00:27:01.730 "name": "BaseBdev3", 00:27:01.730 "uuid": "1826a426-d014-4b69-84df-f1061db08bc2", 00:27:01.730 "is_configured": true, 00:27:01.730 "data_offset": 0, 00:27:01.730 "data_size": 65536 00:27:01.730 }, 00:27:01.730 { 00:27:01.730 "name": "BaseBdev4", 00:27:01.730 "uuid": "3b9110f9-f76c-4084-94f1-285a1bd9d8ed", 00:27:01.730 "is_configured": true, 00:27:01.730 "data_offset": 0, 00:27:01.730 "data_size": 65536 00:27:01.730 } 00:27:01.730 ] 00:27:01.730 }' 00:27:01.730 05:09:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:01.730 05:09:31 -- common/autotest_common.sh@10 -- # set +x 00:27:02.297 05:09:32 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:02.555 [2024-04-27 05:09:32.313751] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:02.555 [2024-04-27 05:09:32.313830] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:02.555 [2024-04-27 05:09:32.313978] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:02.555 [2024-04-27 05:09:32.314095] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:27:02.555 [2024-04-27 05:09:32.314110] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:27:02.555 05:09:32 -- bdev/bdev_raid.sh@671 -- # jq length 00:27:02.555 05:09:32 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.813 05:09:32 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:27:02.813 05:09:32 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:27:02.813 05:09:32 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:02.813 05:09:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:02.813 05:09:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:02.813 05:09:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:02.813 05:09:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:02.813 05:09:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:02.813 05:09:32 -- bdev/nbd_common.sh@12 -- # local i 00:27:02.813 05:09:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:02.813 05:09:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:02.813 05:09:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:03.071 /dev/nbd0 00:27:03.071 05:09:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:03.071 05:09:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:03.071 05:09:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:27:03.071 05:09:32 -- common/autotest_common.sh@857 -- # local i 00:27:03.071 05:09:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:03.071 05:09:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:03.071 05:09:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:27:03.071 05:09:32 -- common/autotest_common.sh@861 -- # break 00:27:03.071 05:09:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:03.071 05:09:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:03.071 05:09:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:03.071 1+0 records in 00:27:03.071 1+0 records out 00:27:03.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670439 s, 6.1 MB/s 00:27:03.071 05:09:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:03.071 05:09:32 -- common/autotest_common.sh@874 -- # size=4096 00:27:03.071 05:09:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:03.071 05:09:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:03.071 05:09:32 -- common/autotest_common.sh@877 -- # return 0 00:27:03.071 05:09:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:03.071 05:09:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:03.071 05:09:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:03.329 /dev/nbd1 00:27:03.329 05:09:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:03.329 05:09:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:03.329 05:09:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:27:03.329 05:09:33 -- common/autotest_common.sh@857 -- # local i 00:27:03.329 05:09:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:03.329 
05:09:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:03.329 05:09:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:27:03.329 05:09:33 -- common/autotest_common.sh@861 -- # break 00:27:03.329 05:09:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:03.329 05:09:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:03.329 05:09:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:03.329 1+0 records in 00:27:03.329 1+0 records out 00:27:03.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577052 s, 7.1 MB/s 00:27:03.329 05:09:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:03.329 05:09:33 -- common/autotest_common.sh@874 -- # size=4096 00:27:03.329 05:09:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:03.329 05:09:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:03.329 05:09:33 -- common/autotest_common.sh@877 -- # return 0 00:27:03.329 05:09:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:03.329 05:09:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:03.329 05:09:33 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:03.587 05:09:33 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:03.587 05:09:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:03.587 05:09:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:03.587 05:09:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:03.587 05:09:33 -- bdev/nbd_common.sh@51 -- # local i 00:27:03.587 05:09:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:03.587 05:09:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:03.587 05:09:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:03.587 05:09:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:03.587 05:09:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:03.587 05:09:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:03.587 05:09:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:03.587 05:09:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:03.587 05:09:33 -- bdev/nbd_common.sh@41 -- # break 00:27:03.587 05:09:33 -- bdev/nbd_common.sh@45 -- # return 0 00:27:03.587 05:09:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:03.587 05:09:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:04.153 05:09:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:04.153 05:09:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:04.153 05:09:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:04.153 05:09:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:04.153 05:09:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:04.153 05:09:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:04.153 05:09:33 -- bdev/nbd_common.sh@41 -- # break 00:27:04.153 05:09:33 -- bdev/nbd_common.sh@45 -- # return 0 00:27:04.153 05:09:33 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:27:04.153 05:09:33 -- bdev/bdev_raid.sh@709 -- # killprocess 143451 00:27:04.153 05:09:33 -- common/autotest_common.sh@926 -- # '[' -z 143451 ']' 00:27:04.153 05:09:33 -- 
common/autotest_common.sh@930 -- # kill -0 143451 00:27:04.153 05:09:33 -- common/autotest_common.sh@931 -- # uname 00:27:04.153 05:09:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:04.153 05:09:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 143451 00:27:04.153 05:09:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:04.153 05:09:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:04.153 05:09:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 143451' 00:27:04.153 killing process with pid 143451 00:27:04.153 05:09:33 -- common/autotest_common.sh@945 -- # kill 143451 00:27:04.153 Received shutdown signal, test time was about 60.000000 seconds 00:27:04.153 00:27:04.153 Latency(us) 00:27:04.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.153 =================================================================================================================== 00:27:04.153 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:04.153 05:09:33 -- common/autotest_common.sh@950 -- # wait 143451 00:27:04.153 [2024-04-27 05:09:33.805947] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:04.153 [2024-04-27 05:09:33.873611] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@711 -- # return 0 00:27:04.412 00:27:04.412 real 0m24.979s 00:27:04.412 user 0m37.075s 00:27:04.412 sys 0m3.099s 00:27:04.412 05:09:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:04.412 05:09:34 -- common/autotest_common.sh@10 -- # set +x 00:27:04.412 ************************************ 00:27:04.412 END TEST raid5f_rebuild_test 00:27:04.412 ************************************ 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:27:04.412 05:09:34 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:27:04.412 05:09:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:04.412 05:09:34 -- common/autotest_common.sh@10 -- # set +x 00:27:04.412 ************************************ 00:27:04.412 START TEST raid5f_rebuild_test_sb 00:27:04.412 ************************************ 00:27:04.412 05:09:34 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 true false 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:27:04.412 05:09:34 -- 
bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:27:04.412 05:09:34 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:27:04.413 05:09:34 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:27:04.413 05:09:34 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:27:04.413 05:09:34 -- bdev/bdev_raid.sh@544 -- # raid_pid=144061 00:27:04.413 05:09:34 -- bdev/bdev_raid.sh@545 -- # waitforlisten 144061 /var/tmp/spdk-raid.sock 00:27:04.413 05:09:34 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:04.413 05:09:34 -- common/autotest_common.sh@819 -- # '[' -z 144061 ']' 00:27:04.413 05:09:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:04.413 05:09:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:04.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:04.413 05:09:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:04.413 05:09:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:04.413 05:09:34 -- common/autotest_common.sh@10 -- # set +x 00:27:04.672 [2024-04-27 05:09:34.355950] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:04.672 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:04.672 Zero copy mechanism will not be used. 
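The bdevperf launch recorded here can be reproduced by hand. A sketch using the same binary path, socket, and options shown in the trace; the readiness loop is a simplified stand-in for waitforlisten, not its exact implementation:

  sock=/var/tmp/spdk-raid.sock
  # Launch bdevperf with the options shown above and remember its pid
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # Wait until the application answers on its UNIX-domain RPC socket
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done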
00:27:04.672 [2024-04-27 05:09:34.356181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144061 ] 00:27:04.672 [2024-04-27 05:09:34.514592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.930 [2024-04-27 05:09:34.626951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.930 [2024-04-27 05:09:34.711460] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:05.497 05:09:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:05.497 05:09:35 -- common/autotest_common.sh@852 -- # return 0 00:27:05.497 05:09:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:27:05.497 05:09:35 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:27:05.497 05:09:35 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:05.756 BaseBdev1_malloc 00:27:05.756 05:09:35 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:06.016 [2024-04-27 05:09:35.768863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:06.016 [2024-04-27 05:09:35.769303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.016 [2024-04-27 05:09:35.769464] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:27:06.016 [2024-04-27 05:09:35.769618] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.016 [2024-04-27 05:09:35.772695] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.016 [2024-04-27 05:09:35.772884] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:06.016 BaseBdev1 00:27:06.016 05:09:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:27:06.016 05:09:35 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:27:06.016 05:09:35 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:06.275 BaseBdev2_malloc 00:27:06.275 05:09:36 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:06.533 [2024-04-27 05:09:36.228975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:06.533 [2024-04-27 05:09:36.229217] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.533 [2024-04-27 05:09:36.229311] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:06.533 [2024-04-27 05:09:36.229601] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.533 [2024-04-27 05:09:36.232263] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.533 [2024-04-27 05:09:36.232422] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:06.533 BaseBdev2 00:27:06.533 05:09:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:27:06.533 05:09:36 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:27:06.533 05:09:36 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:06.792 BaseBdev3_malloc 00:27:06.792 05:09:36 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:27:06.792 [2024-04-27 05:09:36.666578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:06.792 [2024-04-27 05:09:36.666908] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.792 [2024-04-27 05:09:36.667001] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:27:06.792 [2024-04-27 05:09:36.667233] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.792 [2024-04-27 05:09:36.670124] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.792 [2024-04-27 05:09:36.670345] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:06.792 BaseBdev3 00:27:06.792 05:09:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:27:06.792 05:09:36 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:27:06.792 05:09:36 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:07.051 BaseBdev4_malloc 00:27:07.051 05:09:36 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:27:07.309 [2024-04-27 05:09:37.137949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:27:07.309 [2024-04-27 05:09:37.138358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.309 [2024-04-27 05:09:37.138447] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:27:07.309 [2024-04-27 05:09:37.138609] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:07.309 [2024-04-27 05:09:37.141518] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:07.309 [2024-04-27 05:09:37.141698] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:07.309 BaseBdev4 00:27:07.309 05:09:37 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:07.569 spare_malloc 00:27:07.569 05:09:37 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:07.828 spare_delay 00:27:07.828 05:09:37 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:08.087 [2024-04-27 05:09:37.849700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:08.087 [2024-04-27 05:09:37.850079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:08.087 [2024-04-27 05:09:37.850163] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:27:08.087 [2024-04-27 05:09:37.850335] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:08.087 [2024-04-27 05:09:37.853163] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:27:08.087 [2024-04-27 05:09:37.853349] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:08.087 spare 00:27:08.087 05:09:37 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:27:08.345 [2024-04-27 05:09:38.066008] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:08.345 [2024-04-27 05:09:38.068655] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:08.346 [2024-04-27 05:09:38.068896] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:08.346 [2024-04-27 05:09:38.069009] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:08.346 [2024-04-27 05:09:38.069344] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:27:08.346 [2024-04-27 05:09:38.069395] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:08.346 [2024-04-27 05:09:38.069681] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:27:08.346 [2024-04-27 05:09:38.070726] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:27:08.346 [2024-04-27 05:09:38.070876] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:27:08.346 [2024-04-27 05:09:38.071221] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:08.346 05:09:38 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:27:08.346 05:09:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:08.346 05:09:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:08.346 05:09:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:08.346 05:09:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:08.346 05:09:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:08.346 05:09:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:08.346 05:09:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:08.346 05:09:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:08.346 05:09:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:08.346 05:09:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.346 05:09:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.605 05:09:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:08.605 "name": "raid_bdev1", 00:27:08.605 "uuid": "6af3170b-c92e-4296-a6f6-23cd2baee20f", 00:27:08.605 "strip_size_kb": 64, 00:27:08.605 "state": "online", 00:27:08.605 "raid_level": "raid5f", 00:27:08.605 "superblock": true, 00:27:08.605 "num_base_bdevs": 4, 00:27:08.605 "num_base_bdevs_discovered": 4, 00:27:08.605 "num_base_bdevs_operational": 4, 00:27:08.605 "base_bdevs_list": [ 00:27:08.605 { 00:27:08.605 "name": "BaseBdev1", 00:27:08.605 "uuid": "588f793a-8e04-51f0-88cb-a3c351510402", 00:27:08.605 "is_configured": true, 00:27:08.605 "data_offset": 2048, 00:27:08.605 "data_size": 63488 00:27:08.605 }, 00:27:08.605 { 00:27:08.605 "name": "BaseBdev2", 00:27:08.605 "uuid": "a3270ce9-cdd9-5545-bfe1-7cad9eb72ec7", 00:27:08.605 "is_configured": true, 00:27:08.605 
"data_offset": 2048, 00:27:08.605 "data_size": 63488 00:27:08.605 }, 00:27:08.605 { 00:27:08.605 "name": "BaseBdev3", 00:27:08.605 "uuid": "5afe67c5-ad2a-5c5e-a189-f09c6f6f77d8", 00:27:08.605 "is_configured": true, 00:27:08.605 "data_offset": 2048, 00:27:08.605 "data_size": 63488 00:27:08.605 }, 00:27:08.605 { 00:27:08.605 "name": "BaseBdev4", 00:27:08.605 "uuid": "0316c959-ffa5-5039-a993-b87e42d4714f", 00:27:08.605 "is_configured": true, 00:27:08.605 "data_offset": 2048, 00:27:08.605 "data_size": 63488 00:27:08.605 } 00:27:08.605 ] 00:27:08.605 }' 00:27:08.605 05:09:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:08.605 05:09:38 -- common/autotest_common.sh@10 -- # set +x 00:27:09.173 05:09:38 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:09.173 05:09:38 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:27:09.432 [2024-04-27 05:09:39.179852] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:09.432 05:09:39 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:27:09.432 05:09:39 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:09.432 05:09:39 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:09.691 05:09:39 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:27:09.691 05:09:39 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:27:09.691 05:09:39 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:27:09.691 05:09:39 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:09.691 05:09:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:09.691 05:09:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:09.691 05:09:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:09.691 05:09:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:09.691 05:09:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:09.691 05:09:39 -- bdev/nbd_common.sh@12 -- # local i 00:27:09.691 05:09:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:09.691 05:09:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:09.691 05:09:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:09.951 [2024-04-27 05:09:39.631772] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:09.951 /dev/nbd0 00:27:09.951 05:09:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:09.951 05:09:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:09.951 05:09:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:27:09.951 05:09:39 -- common/autotest_common.sh@857 -- # local i 00:27:09.951 05:09:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:09.951 05:09:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:09.951 05:09:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:27:09.951 05:09:39 -- common/autotest_common.sh@861 -- # break 00:27:09.951 05:09:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:09.951 05:09:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:09.951 05:09:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:09.951 1+0 records in 00:27:09.951 1+0 records out 00:27:09.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000365133 s, 11.2 MB/s 00:27:09.951 05:09:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:09.951 05:09:39 -- common/autotest_common.sh@874 -- # size=4096 00:27:09.951 05:09:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:09.951 05:09:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:09.951 05:09:39 -- common/autotest_common.sh@877 -- # return 0 00:27:09.951 05:09:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:09.951 05:09:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:09.951 05:09:39 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:27:09.951 05:09:39 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:27:09.951 05:09:39 -- bdev/bdev_raid.sh@582 -- # echo 192 00:27:09.951 05:09:39 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:27:10.518 496+0 records in 00:27:10.518 496+0 records out 00:27:10.518 97517568 bytes (98 MB, 93 MiB) copied, 0.534835 s, 182 MB/s 00:27:10.518 05:09:40 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:10.518 05:09:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:10.518 05:09:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:10.518 05:09:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:10.518 05:09:40 -- bdev/nbd_common.sh@51 -- # local i 00:27:10.518 05:09:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:10.518 05:09:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:10.777 05:09:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:10.777 [2024-04-27 05:09:40.481515] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:10.777 05:09:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:10.777 05:09:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:10.777 05:09:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:10.777 05:09:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:10.777 05:09:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:10.777 05:09:40 -- bdev/nbd_common.sh@41 -- # break 00:27:10.777 05:09:40 -- bdev/nbd_common.sh@45 -- # return 0 00:27:10.777 05:09:40 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:11.036 [2024-04-27 05:09:40.697092] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:11.036 05:09:40 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:11.036 05:09:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:11.036 05:09:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:11.036 05:09:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:11.036 05:09:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:11.036 05:09:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:11.036 05:09:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:11.036 05:09:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:11.036 05:09:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:11.036 05:09:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:11.036 05:09:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.036 05:09:40 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.295 05:09:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:11.295 "name": "raid_bdev1", 00:27:11.295 "uuid": "6af3170b-c92e-4296-a6f6-23cd2baee20f", 00:27:11.295 "strip_size_kb": 64, 00:27:11.295 "state": "online", 00:27:11.295 "raid_level": "raid5f", 00:27:11.295 "superblock": true, 00:27:11.295 "num_base_bdevs": 4, 00:27:11.295 "num_base_bdevs_discovered": 3, 00:27:11.295 "num_base_bdevs_operational": 3, 00:27:11.295 "base_bdevs_list": [ 00:27:11.295 { 00:27:11.295 "name": null, 00:27:11.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.295 "is_configured": false, 00:27:11.295 "data_offset": 2048, 00:27:11.295 "data_size": 63488 00:27:11.295 }, 00:27:11.295 { 00:27:11.295 "name": "BaseBdev2", 00:27:11.295 "uuid": "a3270ce9-cdd9-5545-bfe1-7cad9eb72ec7", 00:27:11.295 "is_configured": true, 00:27:11.295 "data_offset": 2048, 00:27:11.295 "data_size": 63488 00:27:11.295 }, 00:27:11.295 { 00:27:11.295 "name": "BaseBdev3", 00:27:11.295 "uuid": "5afe67c5-ad2a-5c5e-a189-f09c6f6f77d8", 00:27:11.295 "is_configured": true, 00:27:11.295 "data_offset": 2048, 00:27:11.295 "data_size": 63488 00:27:11.295 }, 00:27:11.295 { 00:27:11.295 "name": "BaseBdev4", 00:27:11.295 "uuid": "0316c959-ffa5-5039-a993-b87e42d4714f", 00:27:11.295 "is_configured": true, 00:27:11.295 "data_offset": 2048, 00:27:11.295 "data_size": 63488 00:27:11.295 } 00:27:11.295 ] 00:27:11.295 }' 00:27:11.295 05:09:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:11.295 05:09:40 -- common/autotest_common.sh@10 -- # set +x 00:27:11.861 05:09:41 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:12.120 [2024-04-27 05:09:41.877618] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:27:12.120 [2024-04-27 05:09:41.877978] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:12.120 [2024-04-27 05:09:41.883733] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a710 00:27:12.120 [2024-04-27 05:09:41.886941] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:12.120 05:09:41 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:27:13.055 05:09:42 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:13.055 05:09:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:13.055 05:09:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:13.055 05:09:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:13.055 05:09:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:13.055 05:09:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.055 05:09:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.313 05:09:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:13.313 "name": "raid_bdev1", 00:27:13.313 "uuid": "6af3170b-c92e-4296-a6f6-23cd2baee20f", 00:27:13.313 "strip_size_kb": 64, 00:27:13.313 "state": "online", 00:27:13.313 "raid_level": "raid5f", 00:27:13.313 "superblock": true, 00:27:13.313 "num_base_bdevs": 4, 00:27:13.313 "num_base_bdevs_discovered": 4, 00:27:13.313 "num_base_bdevs_operational": 4, 00:27:13.313 "process": { 00:27:13.313 "type": "rebuild", 00:27:13.313 "target": "spare", 00:27:13.313 "progress": { 
00:27:13.313 "blocks": 23040, 00:27:13.313 "percent": 12 00:27:13.313 } 00:27:13.313 }, 00:27:13.313 "base_bdevs_list": [ 00:27:13.313 { 00:27:13.313 "name": "spare", 00:27:13.313 "uuid": "3b18e6d0-34b5-5d29-a247-604c27638df2", 00:27:13.313 "is_configured": true, 00:27:13.313 "data_offset": 2048, 00:27:13.313 "data_size": 63488 00:27:13.313 }, 00:27:13.313 { 00:27:13.313 "name": "BaseBdev2", 00:27:13.313 "uuid": "a3270ce9-cdd9-5545-bfe1-7cad9eb72ec7", 00:27:13.313 "is_configured": true, 00:27:13.313 "data_offset": 2048, 00:27:13.313 "data_size": 63488 00:27:13.313 }, 00:27:13.313 { 00:27:13.313 "name": "BaseBdev3", 00:27:13.313 "uuid": "5afe67c5-ad2a-5c5e-a189-f09c6f6f77d8", 00:27:13.313 "is_configured": true, 00:27:13.313 "data_offset": 2048, 00:27:13.313 "data_size": 63488 00:27:13.313 }, 00:27:13.313 { 00:27:13.313 "name": "BaseBdev4", 00:27:13.313 "uuid": "0316c959-ffa5-5039-a993-b87e42d4714f", 00:27:13.313 "is_configured": true, 00:27:13.313 "data_offset": 2048, 00:27:13.313 "data_size": 63488 00:27:13.313 } 00:27:13.313 ] 00:27:13.313 }' 00:27:13.313 05:09:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:13.313 05:09:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:13.313 05:09:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:13.571 05:09:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:13.571 05:09:43 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:13.571 [2024-04-27 05:09:43.481463] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:13.828 [2024-04-27 05:09:43.503695] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:13.828 [2024-04-27 05:09:43.503929] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:13.828 05:09:43 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:13.828 05:09:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:13.828 05:09:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:13.828 05:09:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:13.828 05:09:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:13.828 05:09:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:13.828 05:09:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:13.828 05:09:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:13.828 05:09:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:13.828 05:09:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:13.828 05:09:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.828 05:09:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:14.086 05:09:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:14.086 "name": "raid_bdev1", 00:27:14.086 "uuid": "6af3170b-c92e-4296-a6f6-23cd2baee20f", 00:27:14.086 "strip_size_kb": 64, 00:27:14.086 "state": "online", 00:27:14.086 "raid_level": "raid5f", 00:27:14.086 "superblock": true, 00:27:14.086 "num_base_bdevs": 4, 00:27:14.086 "num_base_bdevs_discovered": 3, 00:27:14.086 "num_base_bdevs_operational": 3, 00:27:14.086 "base_bdevs_list": [ 00:27:14.086 { 00:27:14.086 "name": null, 00:27:14.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.086 "is_configured": 
false, 00:27:14.086 "data_offset": 2048, 00:27:14.086 "data_size": 63488 00:27:14.086 }, 00:27:14.086 { 00:27:14.086 "name": "BaseBdev2", 00:27:14.086 "uuid": "a3270ce9-cdd9-5545-bfe1-7cad9eb72ec7", 00:27:14.086 "is_configured": true, 00:27:14.086 "data_offset": 2048, 00:27:14.086 "data_size": 63488 00:27:14.086 }, 00:27:14.086 { 00:27:14.086 "name": "BaseBdev3", 00:27:14.086 "uuid": "5afe67c5-ad2a-5c5e-a189-f09c6f6f77d8", 00:27:14.086 "is_configured": true, 00:27:14.086 "data_offset": 2048, 00:27:14.086 "data_size": 63488 00:27:14.086 }, 00:27:14.086 { 00:27:14.086 "name": "BaseBdev4", 00:27:14.086 "uuid": "0316c959-ffa5-5039-a993-b87e42d4714f", 00:27:14.086 "is_configured": true, 00:27:14.086 "data_offset": 2048, 00:27:14.086 "data_size": 63488 00:27:14.086 } 00:27:14.086 ] 00:27:14.086 }' 00:27:14.086 05:09:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:14.086 05:09:43 -- common/autotest_common.sh@10 -- # set +x 00:27:14.652 05:09:44 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:14.652 05:09:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:14.652 05:09:44 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:14.652 05:09:44 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:14.652 05:09:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:14.652 05:09:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.652 05:09:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:14.911 05:09:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:14.911 "name": "raid_bdev1", 00:27:14.911 "uuid": "6af3170b-c92e-4296-a6f6-23cd2baee20f", 00:27:14.911 "strip_size_kb": 64, 00:27:14.911 "state": "online", 00:27:14.911 "raid_level": "raid5f", 00:27:14.911 "superblock": true, 00:27:14.911 "num_base_bdevs": 4, 00:27:14.911 "num_base_bdevs_discovered": 3, 00:27:14.911 "num_base_bdevs_operational": 3, 00:27:14.911 "base_bdevs_list": [ 00:27:14.911 { 00:27:14.911 "name": null, 00:27:14.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.911 "is_configured": false, 00:27:14.911 "data_offset": 2048, 00:27:14.911 "data_size": 63488 00:27:14.911 }, 00:27:14.911 { 00:27:14.911 "name": "BaseBdev2", 00:27:14.911 "uuid": "a3270ce9-cdd9-5545-bfe1-7cad9eb72ec7", 00:27:14.911 "is_configured": true, 00:27:14.911 "data_offset": 2048, 00:27:14.911 "data_size": 63488 00:27:14.911 }, 00:27:14.911 { 00:27:14.911 "name": "BaseBdev3", 00:27:14.911 "uuid": "5afe67c5-ad2a-5c5e-a189-f09c6f6f77d8", 00:27:14.911 "is_configured": true, 00:27:14.911 "data_offset": 2048, 00:27:14.911 "data_size": 63488 00:27:14.911 }, 00:27:14.911 { 00:27:14.911 "name": "BaseBdev4", 00:27:14.911 "uuid": "0316c959-ffa5-5039-a993-b87e42d4714f", 00:27:14.911 "is_configured": true, 00:27:14.911 "data_offset": 2048, 00:27:14.911 "data_size": 63488 00:27:14.911 } 00:27:14.911 ] 00:27:14.911 }' 00:27:14.911 05:09:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:14.911 05:09:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:14.911 05:09:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:14.911 05:09:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:14.911 05:09:44 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:15.170 [2024-04-27 05:09:44.917402] bdev_raid.c:3095:raid_bdev_attach_base_bdev: 
*DEBUG*: attach_base_device: spare 00:27:15.170 [2024-04-27 05:09:44.917670] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:15.170 [2024-04-27 05:09:44.923573] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:27:15.170 [2024-04-27 05:09:44.926386] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:15.170 05:09:44 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:27:16.104 05:09:45 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:16.104 05:09:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:16.104 05:09:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:16.104 05:09:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:16.104 05:09:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:16.104 05:09:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.104 05:09:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.362 05:09:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:16.362 "name": "raid_bdev1", 00:27:16.362 "uuid": "6af3170b-c92e-4296-a6f6-23cd2baee20f", 00:27:16.362 "strip_size_kb": 64, 00:27:16.362 "state": "online", 00:27:16.362 "raid_level": "raid5f", 00:27:16.362 "superblock": true, 00:27:16.362 "num_base_bdevs": 4, 00:27:16.362 "num_base_bdevs_discovered": 4, 00:27:16.362 "num_base_bdevs_operational": 4, 00:27:16.362 "process": { 00:27:16.362 "type": "rebuild", 00:27:16.362 "target": "spare", 00:27:16.362 "progress": { 00:27:16.362 "blocks": 23040, 00:27:16.362 "percent": 12 00:27:16.362 } 00:27:16.362 }, 00:27:16.362 "base_bdevs_list": [ 00:27:16.362 { 00:27:16.362 "name": "spare", 00:27:16.362 "uuid": "3b18e6d0-34b5-5d29-a247-604c27638df2", 00:27:16.362 "is_configured": true, 00:27:16.362 "data_offset": 2048, 00:27:16.362 "data_size": 63488 00:27:16.362 }, 00:27:16.362 { 00:27:16.362 "name": "BaseBdev2", 00:27:16.362 "uuid": "a3270ce9-cdd9-5545-bfe1-7cad9eb72ec7", 00:27:16.362 "is_configured": true, 00:27:16.362 "data_offset": 2048, 00:27:16.362 "data_size": 63488 00:27:16.362 }, 00:27:16.362 { 00:27:16.362 "name": "BaseBdev3", 00:27:16.362 "uuid": "5afe67c5-ad2a-5c5e-a189-f09c6f6f77d8", 00:27:16.362 "is_configured": true, 00:27:16.362 "data_offset": 2048, 00:27:16.362 "data_size": 63488 00:27:16.362 }, 00:27:16.362 { 00:27:16.362 "name": "BaseBdev4", 00:27:16.362 "uuid": "0316c959-ffa5-5039-a993-b87e42d4714f", 00:27:16.362 "is_configured": true, 00:27:16.362 "data_offset": 2048, 00:27:16.362 "data_size": 63488 00:27:16.362 } 00:27:16.362 ] 00:27:16.362 }' 00:27:16.362 05:09:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:16.362 05:09:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:16.362 05:09:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:16.621 05:09:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:16.621 05:09:46 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:27:16.621 05:09:46 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:27:16.621 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:27:16.621 05:09:46 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:27:16.621 05:09:46 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:27:16.621 05:09:46 -- bdev/bdev_raid.sh@657 -- # local timeout=752 
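Two notes on the trace above. First, the "[: =: unary operator expected" message from bdev_raid.sh line 617 is the usual symptom of an unquoted shell expansion that turned out empty; quoting the operand avoids the syntax error (the variable name below is hypothetical):

  if [ "${io_flag:-}" = false ]; then
      :   # handle the false case
  fi

Second, the timeout loop that follows polls rebuild progress once per second. A compact sketch of that pattern, assuming the same socket, bdev name, and rebuild target as this run:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  timeout=752
  # Keep polling while the raid bdev still reports an in-progress rebuild toward "spare"
  while (( SECONDS < timeout )); do
      info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
      [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]] || break
      sleep 1
  done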
00:27:16.621 05:09:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:16.621 05:09:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:16.621 05:09:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:16.621 05:09:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:16.621 05:09:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:16.621 05:09:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:16.621 05:09:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.621 05:09:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.621 05:09:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:16.621 "name": "raid_bdev1", 00:27:16.621 "uuid": "6af3170b-c92e-4296-a6f6-23cd2baee20f", 00:27:16.621 "strip_size_kb": 64, 00:27:16.621 "state": "online", 00:27:16.621 "raid_level": "raid5f", 00:27:16.621 "superblock": true, 00:27:16.621 "num_base_bdevs": 4, 00:27:16.621 "num_base_bdevs_discovered": 4, 00:27:16.621 "num_base_bdevs_operational": 4, 00:27:16.621 "process": { 00:27:16.621 "type": "rebuild", 00:27:16.621 "target": "spare", 00:27:16.621 "progress": { 00:27:16.621 "blocks": 28800, 00:27:16.621 "percent": 15 00:27:16.621 } 00:27:16.621 }, 00:27:16.621 "base_bdevs_list": [ 00:27:16.621 { 00:27:16.621 "name": "spare", 00:27:16.621 "uuid": "3b18e6d0-34b5-5d29-a247-604c27638df2", 00:27:16.621 "is_configured": true, 00:27:16.621 "data_offset": 2048, 00:27:16.621 "data_size": 63488 00:27:16.621 }, 00:27:16.621 { 00:27:16.621 "name": "BaseBdev2", 00:27:16.621 "uuid": "a3270ce9-cdd9-5545-bfe1-7cad9eb72ec7", 00:27:16.621 "is_configured": true, 00:27:16.621 "data_offset": 2048, 00:27:16.621 "data_size": 63488 00:27:16.621 }, 00:27:16.621 { 00:27:16.621 "name": "BaseBdev3", 00:27:16.621 "uuid": "5afe67c5-ad2a-5c5e-a189-f09c6f6f77d8", 00:27:16.621 "is_configured": true, 00:27:16.621 "data_offset": 2048, 00:27:16.621 "data_size": 63488 00:27:16.621 }, 00:27:16.621 { 00:27:16.621 "name": "BaseBdev4", 00:27:16.621 "uuid": "0316c959-ffa5-5039-a993-b87e42d4714f", 00:27:16.621 "is_configured": true, 00:27:16.621 "data_offset": 2048, 00:27:16.621 "data_size": 63488 00:27:16.621 } 00:27:16.621 ] 00:27:16.621 }' 00:27:16.621 05:09:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:16.880 05:09:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:16.880 05:09:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:16.880 05:09:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:16.880 05:09:46 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:17.815 05:09:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:17.815 05:09:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:17.815 05:09:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:17.815 05:09:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:17.815 05:09:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:17.815 05:09:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:17.815 05:09:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.815 05:09:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.074 05:09:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:18.074 "name": 
"raid_bdev1", 00:27:18.074 "uuid": "6af3170b-c92e-4296-a6f6-23cd2baee20f", 00:27:18.074 "strip_size_kb": 64, 00:27:18.074 "state": "online", 00:27:18.074 "raid_level": "raid5f", 00:27:18.074 "superblock": true, 00:27:18.074 "num_base_bdevs": 4, 00:27:18.074 "num_base_bdevs_discovered": 4, 00:27:18.074 "num_base_bdevs_operational": 4, 00:27:18.074 "process": { 00:27:18.074 "type": "rebuild", 00:27:18.074 "target": "spare", 00:27:18.074 "progress": { 00:27:18.074 "blocks": 53760, 00:27:18.074 "percent": 28 00:27:18.074 } 00:27:18.074 }, 00:27:18.074 "base_bdevs_list": [ 00:27:18.074 { 00:27:18.074 "name": "spare", 00:27:18.074 "uuid": "3b18e6d0-34b5-5d29-a247-604c27638df2", 00:27:18.074 "is_configured": true, 00:27:18.074 "data_offset": 2048, 00:27:18.074 "data_size": 63488 00:27:18.074 }, 00:27:18.074 { 00:27:18.074 "name": "BaseBdev2", 00:27:18.074 "uuid": "a3270ce9-cdd9-5545-bfe1-7cad9eb72ec7", 00:27:18.074 "is_configured": true, 00:27:18.074 "data_offset": 2048, 00:27:18.074 "data_size": 63488 00:27:18.074 }, 00:27:18.074 { 00:27:18.074 "name": "BaseBdev3", 00:27:18.074 "uuid": "5afe67c5-ad2a-5c5e-a189-f09c6f6f77d8", 00:27:18.074 "is_configured": true, 00:27:18.074 "data_offset": 2048, 00:27:18.074 "data_size": 63488 00:27:18.074 }, 00:27:18.074 { 00:27:18.074 "name": "BaseBdev4", 00:27:18.074 "uuid": "0316c959-ffa5-5039-a993-b87e42d4714f", 00:27:18.074 "is_configured": true, 00:27:18.074 "data_offset": 2048, 00:27:18.074 "data_size": 63488 00:27:18.074 } 00:27:18.074 ] 00:27:18.074 }' 00:27:18.074 05:09:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:18.074 05:09:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:18.074 05:09:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:18.074 05:09:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:18.074 05:09:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:19.450 05:09:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:19.450 05:09:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:19.450 05:09:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:19.450 05:09:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:19.450 05:09:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:19.450 05:09:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:19.450 05:09:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.450 05:09:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.450 05:09:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:19.450 "name": "raid_bdev1", 00:27:19.450 "uuid": "6af3170b-c92e-4296-a6f6-23cd2baee20f", 00:27:19.450 "strip_size_kb": 64, 00:27:19.450 "state": "online", 00:27:19.450 "raid_level": "raid5f", 00:27:19.450 "superblock": true, 00:27:19.450 "num_base_bdevs": 4, 00:27:19.450 "num_base_bdevs_discovered": 4, 00:27:19.450 "num_base_bdevs_operational": 4, 00:27:19.450 "process": { 00:27:19.450 "type": "rebuild", 00:27:19.450 "target": "spare", 00:27:19.450 "progress": { 00:27:19.450 "blocks": 80640, 00:27:19.450 "percent": 42 00:27:19.450 } 00:27:19.450 }, 00:27:19.450 "base_bdevs_list": [ 00:27:19.450 { 00:27:19.450 "name": "spare", 00:27:19.450 "uuid": "3b18e6d0-34b5-5d29-a247-604c27638df2", 00:27:19.450 "is_configured": true, 00:27:19.450 "data_offset": 2048, 00:27:19.450 "data_size": 63488 00:27:19.450 }, 00:27:19.450 { 00:27:19.450 
"name": "BaseBdev2", 00:27:19.450 "uuid": "a3270ce9-cdd9-5545-bfe1-7cad9eb72ec7", 00:27:19.450 "is_configured": true, 00:27:19.450 "data_offset": 2048, 00:27:19.450 "data_size": 63488 00:27:19.450 }, 00:27:19.450 { 00:27:19.450 "name": "BaseBdev3", 00:27:19.450 "uuid": "5afe67c5-ad2a-5c5e-a189-f09c6f6f77d8", 00:27:19.450 "is_configured": true, 00:27:19.450 "data_offset": 2048, 00:27:19.450 "data_size": 63488 00:27:19.450 }, 00:27:19.450 { 00:27:19.450 "name": "BaseBdev4", 00:27:19.450 "uuid": "0316c959-ffa5-5039-a993-b87e42d4714f", 00:27:19.450 "is_configured": true, 00:27:19.450 "data_offset": 2048, 00:27:19.450 "data_size": 63488 00:27:19.450 } 00:27:19.450 ] 00:27:19.450 }' 00:27:19.450 05:09:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:19.450 05:09:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:19.450 05:09:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:19.450 05:09:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:19.450 05:09:49 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:20.832 05:09:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:20.832 05:09:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:20.832 05:09:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:20.832 05:09:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:20.832 05:09:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:20.833 05:09:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:20.833 05:09:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.833 05:09:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.833 05:09:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:20.833 "name": "raid_bdev1", 00:27:20.833 "uuid": "6af3170b-c92e-4296-a6f6-23cd2baee20f", 00:27:20.833 "strip_size_kb": 64, 00:27:20.833 "state": "online", 00:27:20.833 "raid_level": "raid5f", 00:27:20.833 "superblock": true, 00:27:20.833 "num_base_bdevs": 4, 00:27:20.833 "num_base_bdevs_discovered": 4, 00:27:20.833 "num_base_bdevs_operational": 4, 00:27:20.833 "process": { 00:27:20.833 "type": "rebuild", 00:27:20.833 "target": "spare", 00:27:20.833 "progress": { 00:27:20.833 "blocks": 105600, 00:27:20.833 "percent": 55 00:27:20.833 } 00:27:20.833 }, 00:27:20.833 "base_bdevs_list": [ 00:27:20.833 { 00:27:20.833 "name": "spare", 00:27:20.833 "uuid": "3b18e6d0-34b5-5d29-a247-604c27638df2", 00:27:20.833 "is_configured": true, 00:27:20.833 "data_offset": 2048, 00:27:20.833 "data_size": 63488 00:27:20.833 }, 00:27:20.833 { 00:27:20.833 "name": "BaseBdev2", 00:27:20.833 "uuid": "a3270ce9-cdd9-5545-bfe1-7cad9eb72ec7", 00:27:20.833 "is_configured": true, 00:27:20.833 "data_offset": 2048, 00:27:20.833 "data_size": 63488 00:27:20.833 }, 00:27:20.833 { 00:27:20.833 "name": "BaseBdev3", 00:27:20.833 "uuid": "5afe67c5-ad2a-5c5e-a189-f09c6f6f77d8", 00:27:20.833 "is_configured": true, 00:27:20.833 "data_offset": 2048, 00:27:20.833 "data_size": 63488 00:27:20.833 }, 00:27:20.833 { 00:27:20.833 "name": "BaseBdev4", 00:27:20.833 "uuid": "0316c959-ffa5-5039-a993-b87e42d4714f", 00:27:20.833 "is_configured": true, 00:27:20.833 "data_offset": 2048, 00:27:20.833 "data_size": 63488 00:27:20.833 } 00:27:20.833 ] 00:27:20.833 }' 00:27:20.833 05:09:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:20.833 05:09:50 -- bdev/bdev_raid.sh@190 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:27:20.833 05:09:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:20.833 05:09:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:20.833 05:09:50 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:22.207 05:09:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:22.207 05:09:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:22.207 05:09:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:22.207 05:09:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:22.207 05:09:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:22.207 05:09:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:22.207 05:09:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.207 05:09:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.207 05:09:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:22.207 "name": "raid_bdev1", 00:27:22.207 "uuid": "6af3170b-c92e-4296-a6f6-23cd2baee20f", 00:27:22.207 "strip_size_kb": 64, 00:27:22.207 "state": "online", 00:27:22.207 "raid_level": "raid5f", 00:27:22.207 "superblock": true, 00:27:22.207 "num_base_bdevs": 4, 00:27:22.207 "num_base_bdevs_discovered": 4, 00:27:22.207 "num_base_bdevs_operational": 4, 00:27:22.207 "process": { 00:27:22.207 "type": "rebuild", 00:27:22.207 "target": "spare", 00:27:22.207 "progress": { 00:27:22.207 "blocks": 132480, 00:27:22.207 "percent": 69 00:27:22.207 } 00:27:22.207 }, 00:27:22.207 "base_bdevs_list": [ 00:27:22.207 { 00:27:22.207 "name": "spare", 00:27:22.207 "uuid": "3b18e6d0-34b5-5d29-a247-604c27638df2", 00:27:22.207 "is_configured": true, 00:27:22.207 "data_offset": 2048, 00:27:22.207 "data_size": 63488 00:27:22.207 }, 00:27:22.207 { 00:27:22.207 "name": "BaseBdev2", 00:27:22.207 "uuid": "a3270ce9-cdd9-5545-bfe1-7cad9eb72ec7", 00:27:22.207 "is_configured": true, 00:27:22.207 "data_offset": 2048, 00:27:22.207 "data_size": 63488 00:27:22.207 }, 00:27:22.207 { 00:27:22.207 "name": "BaseBdev3", 00:27:22.207 "uuid": "5afe67c5-ad2a-5c5e-a189-f09c6f6f77d8", 00:27:22.207 "is_configured": true, 00:27:22.207 "data_offset": 2048, 00:27:22.207 "data_size": 63488 00:27:22.207 }, 00:27:22.207 { 00:27:22.207 "name": "BaseBdev4", 00:27:22.207 "uuid": "0316c959-ffa5-5039-a993-b87e42d4714f", 00:27:22.207 "is_configured": true, 00:27:22.207 "data_offset": 2048, 00:27:22.207 "data_size": 63488 00:27:22.207 } 00:27:22.207 ] 00:27:22.207 }' 00:27:22.207 05:09:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:22.207 05:09:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:22.207 05:09:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:22.207 05:09:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:22.207 05:09:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:23.141 05:09:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:23.141 05:09:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:23.141 05:09:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:23.141 05:09:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:23.141 05:09:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:23.141 05:09:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:23.141 05:09:53 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.141 05:09:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.706 05:09:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:23.706 "name": "raid_bdev1", 00:27:23.706 "uuid": "6af3170b-c92e-4296-a6f6-23cd2baee20f", 00:27:23.706 "strip_size_kb": 64, 00:27:23.706 "state": "online", 00:27:23.706 "raid_level": "raid5f", 00:27:23.706 "superblock": true, 00:27:23.706 "num_base_bdevs": 4, 00:27:23.706 "num_base_bdevs_discovered": 4, 00:27:23.706 "num_base_bdevs_operational": 4, 00:27:23.706 "process": { 00:27:23.706 "type": "rebuild", 00:27:23.706 "target": "spare", 00:27:23.706 "progress": { 00:27:23.706 "blocks": 159360, 00:27:23.706 "percent": 83 00:27:23.706 } 00:27:23.706 }, 00:27:23.706 "base_bdevs_list": [ 00:27:23.706 { 00:27:23.706 "name": "spare", 00:27:23.706 "uuid": "3b18e6d0-34b5-5d29-a247-604c27638df2", 00:27:23.706 "is_configured": true, 00:27:23.706 "data_offset": 2048, 00:27:23.706 "data_size": 63488 00:27:23.706 }, 00:27:23.706 { 00:27:23.706 "name": "BaseBdev2", 00:27:23.706 "uuid": "a3270ce9-cdd9-5545-bfe1-7cad9eb72ec7", 00:27:23.706 "is_configured": true, 00:27:23.706 "data_offset": 2048, 00:27:23.706 "data_size": 63488 00:27:23.706 }, 00:27:23.706 { 00:27:23.706 "name": "BaseBdev3", 00:27:23.706 "uuid": "5afe67c5-ad2a-5c5e-a189-f09c6f6f77d8", 00:27:23.706 "is_configured": true, 00:27:23.706 "data_offset": 2048, 00:27:23.706 "data_size": 63488 00:27:23.706 }, 00:27:23.706 { 00:27:23.706 "name": "BaseBdev4", 00:27:23.706 "uuid": "0316c959-ffa5-5039-a993-b87e42d4714f", 00:27:23.706 "is_configured": true, 00:27:23.706 "data_offset": 2048, 00:27:23.706 "data_size": 63488 00:27:23.706 } 00:27:23.706 ] 00:27:23.706 }' 00:27:23.706 05:09:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:23.706 05:09:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:23.706 05:09:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:23.706 05:09:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:23.706 05:09:53 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:24.642 05:09:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:24.642 05:09:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:24.642 05:09:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:24.642 05:09:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:24.642 05:09:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:24.642 05:09:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:24.642 05:09:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:24.642 05:09:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.901 05:09:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:24.901 "name": "raid_bdev1", 00:27:24.901 "uuid": "6af3170b-c92e-4296-a6f6-23cd2baee20f", 00:27:24.901 "strip_size_kb": 64, 00:27:24.901 "state": "online", 00:27:24.901 "raid_level": "raid5f", 00:27:24.901 "superblock": true, 00:27:24.901 "num_base_bdevs": 4, 00:27:24.901 "num_base_bdevs_discovered": 4, 00:27:24.901 "num_base_bdevs_operational": 4, 00:27:24.901 "process": { 00:27:24.901 "type": "rebuild", 00:27:24.901 "target": "spare", 00:27:24.901 "progress": { 00:27:24.901 "blocks": 186240, 00:27:24.901 "percent": 97 00:27:24.901 } 00:27:24.901 }, 
00:27:24.901 "base_bdevs_list": [ 00:27:24.901 { 00:27:24.901 "name": "spare", 00:27:24.901 "uuid": "3b18e6d0-34b5-5d29-a247-604c27638df2", 00:27:24.901 "is_configured": true, 00:27:24.901 "data_offset": 2048, 00:27:24.901 "data_size": 63488 00:27:24.901 }, 00:27:24.901 { 00:27:24.901 "name": "BaseBdev2", 00:27:24.901 "uuid": "a3270ce9-cdd9-5545-bfe1-7cad9eb72ec7", 00:27:24.901 "is_configured": true, 00:27:24.901 "data_offset": 2048, 00:27:24.901 "data_size": 63488 00:27:24.901 }, 00:27:24.901 { 00:27:24.901 "name": "BaseBdev3", 00:27:24.901 "uuid": "5afe67c5-ad2a-5c5e-a189-f09c6f6f77d8", 00:27:24.901 "is_configured": true, 00:27:24.901 "data_offset": 2048, 00:27:24.901 "data_size": 63488 00:27:24.901 }, 00:27:24.901 { 00:27:24.901 "name": "BaseBdev4", 00:27:24.901 "uuid": "0316c959-ffa5-5039-a993-b87e42d4714f", 00:27:24.901 "is_configured": true, 00:27:24.901 "data_offset": 2048, 00:27:24.901 "data_size": 63488 00:27:24.901 } 00:27:24.901 ] 00:27:24.901 }' 00:27:24.901 05:09:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:25.160 05:09:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:25.160 05:09:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:25.160 05:09:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:25.160 05:09:54 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:25.160 [2024-04-27 05:09:55.023355] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:25.160 [2024-04-27 05:09:55.023555] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:25.160 [2024-04-27 05:09:55.023904] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:26.097 05:09:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:26.097 05:09:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:26.097 05:09:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:26.097 05:09:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:26.097 05:09:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:26.097 05:09:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:26.097 05:09:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.097 05:09:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.356 05:09:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:26.356 "name": "raid_bdev1", 00:27:26.356 "uuid": "6af3170b-c92e-4296-a6f6-23cd2baee20f", 00:27:26.356 "strip_size_kb": 64, 00:27:26.356 "state": "online", 00:27:26.356 "raid_level": "raid5f", 00:27:26.356 "superblock": true, 00:27:26.356 "num_base_bdevs": 4, 00:27:26.356 "num_base_bdevs_discovered": 4, 00:27:26.356 "num_base_bdevs_operational": 4, 00:27:26.356 "base_bdevs_list": [ 00:27:26.356 { 00:27:26.356 "name": "spare", 00:27:26.356 "uuid": "3b18e6d0-34b5-5d29-a247-604c27638df2", 00:27:26.356 "is_configured": true, 00:27:26.356 "data_offset": 2048, 00:27:26.356 "data_size": 63488 00:27:26.356 }, 00:27:26.356 { 00:27:26.356 "name": "BaseBdev2", 00:27:26.356 "uuid": "a3270ce9-cdd9-5545-bfe1-7cad9eb72ec7", 00:27:26.356 "is_configured": true, 00:27:26.356 "data_offset": 2048, 00:27:26.356 "data_size": 63488 00:27:26.356 }, 00:27:26.356 { 00:27:26.356 "name": "BaseBdev3", 00:27:26.356 "uuid": "5afe67c5-ad2a-5c5e-a189-f09c6f6f77d8", 00:27:26.356 "is_configured": true, 00:27:26.356 
"data_offset": 2048, 00:27:26.356 "data_size": 63488 00:27:26.356 }, 00:27:26.356 { 00:27:26.356 "name": "BaseBdev4", 00:27:26.356 "uuid": "0316c959-ffa5-5039-a993-b87e42d4714f", 00:27:26.356 "is_configured": true, 00:27:26.356 "data_offset": 2048, 00:27:26.356 "data_size": 63488 00:27:26.356 } 00:27:26.356 ] 00:27:26.356 }' 00:27:26.356 05:09:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:26.356 05:09:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:26.356 05:09:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:26.356 05:09:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:27:26.356 05:09:56 -- bdev/bdev_raid.sh@660 -- # break 00:27:26.356 05:09:56 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:26.356 05:09:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:26.356 05:09:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:26.356 05:09:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:26.356 05:09:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:26.615 05:09:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.615 05:09:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.873 05:09:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:26.873 "name": "raid_bdev1", 00:27:26.873 "uuid": "6af3170b-c92e-4296-a6f6-23cd2baee20f", 00:27:26.873 "strip_size_kb": 64, 00:27:26.873 "state": "online", 00:27:26.873 "raid_level": "raid5f", 00:27:26.873 "superblock": true, 00:27:26.873 "num_base_bdevs": 4, 00:27:26.873 "num_base_bdevs_discovered": 4, 00:27:26.873 "num_base_bdevs_operational": 4, 00:27:26.873 "base_bdevs_list": [ 00:27:26.873 { 00:27:26.873 "name": "spare", 00:27:26.873 "uuid": "3b18e6d0-34b5-5d29-a247-604c27638df2", 00:27:26.873 "is_configured": true, 00:27:26.873 "data_offset": 2048, 00:27:26.873 "data_size": 63488 00:27:26.873 }, 00:27:26.873 { 00:27:26.873 "name": "BaseBdev2", 00:27:26.873 "uuid": "a3270ce9-cdd9-5545-bfe1-7cad9eb72ec7", 00:27:26.873 "is_configured": true, 00:27:26.873 "data_offset": 2048, 00:27:26.873 "data_size": 63488 00:27:26.873 }, 00:27:26.873 { 00:27:26.873 "name": "BaseBdev3", 00:27:26.873 "uuid": "5afe67c5-ad2a-5c5e-a189-f09c6f6f77d8", 00:27:26.873 "is_configured": true, 00:27:26.873 "data_offset": 2048, 00:27:26.873 "data_size": 63488 00:27:26.873 }, 00:27:26.873 { 00:27:26.873 "name": "BaseBdev4", 00:27:26.873 "uuid": "0316c959-ffa5-5039-a993-b87e42d4714f", 00:27:26.873 "is_configured": true, 00:27:26.873 "data_offset": 2048, 00:27:26.873 "data_size": 63488 00:27:26.873 } 00:27:26.873 ] 00:27:26.873 }' 00:27:26.873 05:09:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:26.873 05:09:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:26.873 05:09:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:26.873 05:09:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:26.873 05:09:56 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:27:26.873 05:09:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:26.873 05:09:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:26.873 05:09:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:26.873 05:09:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:26.873 05:09:56 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=4 00:27:26.873 05:09:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:26.873 05:09:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:26.873 05:09:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:26.873 05:09:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:26.873 05:09:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.873 05:09:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:27.143 05:09:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:27.143 "name": "raid_bdev1", 00:27:27.143 "uuid": "6af3170b-c92e-4296-a6f6-23cd2baee20f", 00:27:27.143 "strip_size_kb": 64, 00:27:27.143 "state": "online", 00:27:27.143 "raid_level": "raid5f", 00:27:27.143 "superblock": true, 00:27:27.143 "num_base_bdevs": 4, 00:27:27.143 "num_base_bdevs_discovered": 4, 00:27:27.143 "num_base_bdevs_operational": 4, 00:27:27.143 "base_bdevs_list": [ 00:27:27.143 { 00:27:27.143 "name": "spare", 00:27:27.143 "uuid": "3b18e6d0-34b5-5d29-a247-604c27638df2", 00:27:27.143 "is_configured": true, 00:27:27.143 "data_offset": 2048, 00:27:27.143 "data_size": 63488 00:27:27.143 }, 00:27:27.143 { 00:27:27.143 "name": "BaseBdev2", 00:27:27.143 "uuid": "a3270ce9-cdd9-5545-bfe1-7cad9eb72ec7", 00:27:27.143 "is_configured": true, 00:27:27.143 "data_offset": 2048, 00:27:27.143 "data_size": 63488 00:27:27.143 }, 00:27:27.143 { 00:27:27.143 "name": "BaseBdev3", 00:27:27.143 "uuid": "5afe67c5-ad2a-5c5e-a189-f09c6f6f77d8", 00:27:27.143 "is_configured": true, 00:27:27.143 "data_offset": 2048, 00:27:27.143 "data_size": 63488 00:27:27.143 }, 00:27:27.143 { 00:27:27.143 "name": "BaseBdev4", 00:27:27.143 "uuid": "0316c959-ffa5-5039-a993-b87e42d4714f", 00:27:27.143 "is_configured": true, 00:27:27.143 "data_offset": 2048, 00:27:27.143 "data_size": 63488 00:27:27.143 } 00:27:27.143 ] 00:27:27.143 }' 00:27:27.143 05:09:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:27.143 05:09:56 -- common/autotest_common.sh@10 -- # set +x 00:27:27.712 05:09:57 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:27.969 [2024-04-27 05:09:57.715220] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:27.969 [2024-04-27 05:09:57.715281] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:27.969 [2024-04-27 05:09:57.715420] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:27.969 [2024-04-27 05:09:57.715569] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:27.969 [2024-04-27 05:09:57.715587] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:27:27.969 05:09:57 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.969 05:09:57 -- bdev/bdev_raid.sh@671 -- # jq length 00:27:28.227 05:09:57 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:27:28.227 05:09:57 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:27:28.227 05:09:57 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:28.227 05:09:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:28.227 05:09:57 -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:27:28.227 05:09:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:28.227 05:09:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:28.227 05:09:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:28.227 05:09:57 -- bdev/nbd_common.sh@12 -- # local i 00:27:28.227 05:09:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:28.227 05:09:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:28.227 05:09:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:28.486 /dev/nbd0 00:27:28.486 05:09:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:28.486 05:09:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:28.486 05:09:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:27:28.486 05:09:58 -- common/autotest_common.sh@857 -- # local i 00:27:28.486 05:09:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:28.486 05:09:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:28.486 05:09:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:27:28.486 05:09:58 -- common/autotest_common.sh@861 -- # break 00:27:28.486 05:09:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:28.486 05:09:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:28.486 05:09:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:28.486 1+0 records in 00:27:28.486 1+0 records out 00:27:28.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035362 s, 11.6 MB/s 00:27:28.486 05:09:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:28.486 05:09:58 -- common/autotest_common.sh@874 -- # size=4096 00:27:28.486 05:09:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:28.486 05:09:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:28.486 05:09:58 -- common/autotest_common.sh@877 -- # return 0 00:27:28.486 05:09:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:28.486 05:09:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:28.486 05:09:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:28.744 /dev/nbd1 00:27:28.744 05:09:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:28.744 05:09:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:28.744 05:09:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:27:28.744 05:09:58 -- common/autotest_common.sh@857 -- # local i 00:27:28.744 05:09:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:28.744 05:09:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:28.744 05:09:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:27:29.004 05:09:58 -- common/autotest_common.sh@861 -- # break 00:27:29.004 05:09:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:29.004 05:09:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:29.004 05:09:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:29.004 1+0 records in 00:27:29.004 1+0 records out 00:27:29.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496375 s, 8.3 MB/s 00:27:29.004 05:09:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:29.004 
05:09:58 -- common/autotest_common.sh@874 -- # size=4096 00:27:29.004 05:09:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:29.004 05:09:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:29.004 05:09:58 -- common/autotest_common.sh@877 -- # return 0 00:27:29.004 05:09:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:29.004 05:09:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:29.004 05:09:58 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:29.004 05:09:58 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:29.004 05:09:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:29.004 05:09:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:29.004 05:09:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:29.004 05:09:58 -- bdev/nbd_common.sh@51 -- # local i 00:27:29.004 05:09:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:29.004 05:09:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:29.262 05:09:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:29.262 05:09:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:29.262 05:09:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:29.262 05:09:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:29.262 05:09:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:29.262 05:09:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:29.262 05:09:59 -- bdev/nbd_common.sh@41 -- # break 00:27:29.262 05:09:59 -- bdev/nbd_common.sh@45 -- # return 0 00:27:29.262 05:09:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:29.262 05:09:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:29.521 05:09:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:29.521 05:09:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:29.521 05:09:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:29.521 05:09:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:29.521 05:09:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:29.521 05:09:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:29.521 05:09:59 -- bdev/nbd_common.sh@41 -- # break 00:27:29.521 05:09:59 -- bdev/nbd_common.sh@45 -- # return 0 00:27:29.521 05:09:59 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:27:29.521 05:09:59 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:27:29.521 05:09:59 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:27:29.521 05:09:59 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:30.089 05:09:59 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:30.089 [2024-04-27 05:09:59.967679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:30.089 [2024-04-27 05:09:59.967818] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:30.089 [2024-04-27 05:09:59.967871] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:27:30.089 [2024-04-27 05:09:59.967897] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 
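
The wait that dominates the trace above boils down to a one-second polling loop: the test repeatedly calls bdev_raid_get_bdevs over the raid test socket, selects raid_bdev1 with jq, and keeps sleeping while .process.type is still "rebuild" and .process.target is "spare"; once the process block disappears (both fields fall back to "none"), the loop breaks. The following is a minimal sketch of that pattern, not the bdev_raid.sh helpers themselves; the rpc.py path and socket are taken from the trace, while the function name, timeout value, and error handling are illustrative.

# Sketch of the rebuild-progress wait (simplified names, illustrative timeout).
rpc_raid() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

raid_process() {
    # Print "<process type> <process target>" for raid_bdev1, defaulting to "none none".
    rpc_raid bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1") |
               "\(.process.type // "none") \(.process.target // "none")"'
}

timeout=$((SECONDS + 60))                      # illustrative bound; the test derives its own
while (( SECONDS < timeout )); do
    read -r ptype ptarget < <(raid_process)
    [[ $ptype == none && $ptarget == none ]] && break       # rebuild finished
    [[ $ptype == rebuild && $ptarget == spare ]] || exit 1  # unexpected process state
    sleep 1
done
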
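
Once the rebuild has finished, the nbd_common.sh calls above export one surviving base bdev and the rebuilt spare as kernel block devices, wait until each /dev/nbdX shows up in /proc/partitions and answers a direct-I/O read, and then byte-compare the two devices past the superblock region (cmp -i 1048576, i.e. skipping the 2048-block data_offset reported in the JSON, 2048 * 512 B = 1 MiB). Below is a condensed sketch of that round-trip, assuming the same raid test socket; the wait helper and retry count are simplified stand-ins for the common.sh versions.

# Sketch: export two bdevs over NBD and compare their data regions.
rpc_raid() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

wait_for_nbd() {
    # Poll until the kernel NBD device is listed and readable with direct I/O.
    local name=$1 i
    for ((i = 0; i < 20; i++)); do
        grep -q -w "$name" /proc/partitions &&
            dd if=/dev/"$name" of=/dev/null bs=4096 count=1 iflag=direct 2>/dev/null && return 0
        sleep 0.1
    done
    return 1
}

rpc_raid nbd_start_disk BaseBdev1 /dev/nbd0 && wait_for_nbd nbd0
rpc_raid nbd_start_disk spare     /dev/nbd1 && wait_for_nbd nbd1

cmp -i 1048576 /dev/nbd0 /dev/nbd1           # skip the 1 MiB superblock area on both devices

rpc_raid nbd_stop_disk /dev/nbd0
rpc_raid nbd_stop_disk /dev/nbd1
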
00:27:30.089 [2024-04-27 05:09:59.970916] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:30.089 [2024-04-27 05:09:59.971016] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:30.089 [2024-04-27 05:09:59.971247] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:30.089 [2024-04-27 05:09:59.971360] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:30.089 BaseBdev1 00:27:30.089 05:09:59 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:27:30.089 05:09:59 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:27:30.089 05:09:59 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:27:30.657 05:10:00 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:30.657 [2024-04-27 05:10:00.567956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:30.657 [2024-04-27 05:10:00.568190] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:30.657 [2024-04-27 05:10:00.568261] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:27:30.657 [2024-04-27 05:10:00.568292] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:30.657 [2024-04-27 05:10:00.568866] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:30.657 [2024-04-27 05:10:00.568938] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:30.657 [2024-04-27 05:10:00.569048] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:27:30.657 [2024-04-27 05:10:00.569066] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:27:30.657 [2024-04-27 05:10:00.569075] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:30.657 [2024-04-27 05:10:00.569107] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:27:30.657 [2024-04-27 05:10:00.569180] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:30.657 BaseBdev2 00:27:30.915 05:10:00 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:27:30.915 05:10:00 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:27:30.915 05:10:00 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:27:31.174 05:10:00 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:27:31.433 [2024-04-27 05:10:01.160178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:31.433 [2024-04-27 05:10:01.160352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:31.433 [2024-04-27 05:10:01.160405] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:27:31.433 [2024-04-27 05:10:01.160440] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:31.433 [2024-04-27 05:10:01.161043] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:31.433 [2024-04-27 05:10:01.161120] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:31.433 [2024-04-27 05:10:01.161242] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:27:31.433 [2024-04-27 05:10:01.161290] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:31.433 BaseBdev3 00:27:31.433 05:10:01 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:27:31.433 05:10:01 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:27:31.433 05:10:01 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:27:31.692 05:10:01 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:27:31.951 [2024-04-27 05:10:01.692404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:27:31.951 [2024-04-27 05:10:01.692542] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:31.951 [2024-04-27 05:10:01.692610] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:27:31.951 [2024-04-27 05:10:01.692653] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:31.951 [2024-04-27 05:10:01.693210] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:31.951 [2024-04-27 05:10:01.693281] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:31.951 [2024-04-27 05:10:01.693394] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:27:31.951 [2024-04-27 05:10:01.693427] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:31.951 BaseBdev4 00:27:31.951 05:10:01 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:32.211 05:10:02 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:32.470 [2024-04-27 05:10:02.292602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:32.470 [2024-04-27 05:10:02.292777] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:32.470 [2024-04-27 05:10:02.292826] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:27:32.470 [2024-04-27 05:10:02.292864] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:32.470 [2024-04-27 05:10:02.293571] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:32.470 [2024-04-27 05:10:02.293650] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:32.470 [2024-04-27 05:10:02.293787] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:27:32.470 [2024-04-27 05:10:02.293835] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:32.470 spare 00:27:32.470 05:10:02 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:27:32.470 05:10:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:32.470 05:10:02 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:27:32.470 05:10:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:32.470 05:10:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:32.470 05:10:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:32.470 05:10:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:32.470 05:10:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:32.470 05:10:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:32.470 05:10:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:32.470 05:10:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.470 05:10:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.729 [2024-04-27 05:10:02.394017] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:27:32.729 [2024-04-27 05:10:02.394076] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:32.729 [2024-04-27 05:10:02.394332] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049510 00:27:32.729 [2024-04-27 05:10:02.395437] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:27:32.729 [2024-04-27 05:10:02.395462] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:27:32.729 [2024-04-27 05:10:02.395657] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:32.729 05:10:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:32.729 "name": "raid_bdev1", 00:27:32.729 "uuid": "6af3170b-c92e-4296-a6f6-23cd2baee20f", 00:27:32.729 "strip_size_kb": 64, 00:27:32.729 "state": "online", 00:27:32.729 "raid_level": "raid5f", 00:27:32.729 "superblock": true, 00:27:32.729 "num_base_bdevs": 4, 00:27:32.729 "num_base_bdevs_discovered": 4, 00:27:32.729 "num_base_bdevs_operational": 4, 00:27:32.729 "base_bdevs_list": [ 00:27:32.729 { 00:27:32.729 "name": "spare", 00:27:32.729 "uuid": "3b18e6d0-34b5-5d29-a247-604c27638df2", 00:27:32.729 "is_configured": true, 00:27:32.729 "data_offset": 2048, 00:27:32.729 "data_size": 63488 00:27:32.729 }, 00:27:32.729 { 00:27:32.729 "name": "BaseBdev2", 00:27:32.729 "uuid": "a3270ce9-cdd9-5545-bfe1-7cad9eb72ec7", 00:27:32.729 "is_configured": true, 00:27:32.729 "data_offset": 2048, 00:27:32.729 "data_size": 63488 00:27:32.729 }, 00:27:32.729 { 00:27:32.729 "name": "BaseBdev3", 00:27:32.729 "uuid": "5afe67c5-ad2a-5c5e-a189-f09c6f6f77d8", 00:27:32.729 "is_configured": true, 00:27:32.729 "data_offset": 2048, 00:27:32.729 "data_size": 63488 00:27:32.729 }, 00:27:32.729 { 00:27:32.729 "name": "BaseBdev4", 00:27:32.729 "uuid": "0316c959-ffa5-5039-a993-b87e42d4714f", 00:27:32.729 "is_configured": true, 00:27:32.729 "data_offset": 2048, 00:27:32.729 "data_size": 63488 00:27:32.729 } 00:27:32.729 ] 00:27:32.729 }' 00:27:32.729 05:10:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:32.729 05:10:02 -- common/autotest_common.sh@10 -- # set +x 00:27:33.663 05:10:03 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:33.663 05:10:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:33.664 05:10:03 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:33.664 05:10:03 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:33.664 05:10:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:33.664 05:10:03 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.664 05:10:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.921 05:10:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:33.921 "name": "raid_bdev1", 00:27:33.921 "uuid": "6af3170b-c92e-4296-a6f6-23cd2baee20f", 00:27:33.921 "strip_size_kb": 64, 00:27:33.921 "state": "online", 00:27:33.921 "raid_level": "raid5f", 00:27:33.921 "superblock": true, 00:27:33.921 "num_base_bdevs": 4, 00:27:33.921 "num_base_bdevs_discovered": 4, 00:27:33.921 "num_base_bdevs_operational": 4, 00:27:33.921 "base_bdevs_list": [ 00:27:33.921 { 00:27:33.921 "name": "spare", 00:27:33.921 "uuid": "3b18e6d0-34b5-5d29-a247-604c27638df2", 00:27:33.921 "is_configured": true, 00:27:33.921 "data_offset": 2048, 00:27:33.921 "data_size": 63488 00:27:33.921 }, 00:27:33.921 { 00:27:33.921 "name": "BaseBdev2", 00:27:33.921 "uuid": "a3270ce9-cdd9-5545-bfe1-7cad9eb72ec7", 00:27:33.921 "is_configured": true, 00:27:33.921 "data_offset": 2048, 00:27:33.921 "data_size": 63488 00:27:33.921 }, 00:27:33.921 { 00:27:33.921 "name": "BaseBdev3", 00:27:33.921 "uuid": "5afe67c5-ad2a-5c5e-a189-f09c6f6f77d8", 00:27:33.921 "is_configured": true, 00:27:33.921 "data_offset": 2048, 00:27:33.921 "data_size": 63488 00:27:33.921 }, 00:27:33.921 { 00:27:33.921 "name": "BaseBdev4", 00:27:33.921 "uuid": "0316c959-ffa5-5039-a993-b87e42d4714f", 00:27:33.921 "is_configured": true, 00:27:33.921 "data_offset": 2048, 00:27:33.921 "data_size": 63488 00:27:33.921 } 00:27:33.921 ] 00:27:33.921 }' 00:27:33.921 05:10:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:33.921 05:10:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:33.921 05:10:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:33.921 05:10:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:33.921 05:10:03 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.921 05:10:03 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:34.178 05:10:03 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:27:34.178 05:10:03 -- bdev/bdev_raid.sh@709 -- # killprocess 144061 00:27:34.178 05:10:03 -- common/autotest_common.sh@926 -- # '[' -z 144061 ']' 00:27:34.178 05:10:03 -- common/autotest_common.sh@930 -- # kill -0 144061 00:27:34.178 05:10:03 -- common/autotest_common.sh@931 -- # uname 00:27:34.178 05:10:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:34.178 05:10:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 144061 00:27:34.178 05:10:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:34.178 05:10:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:34.178 05:10:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 144061' 00:27:34.178 killing process with pid 144061 00:27:34.178 05:10:04 -- common/autotest_common.sh@945 -- # kill 144061 00:27:34.178 Received shutdown signal, test time was about 60.000000 seconds 00:27:34.178 00:27:34.178 Latency(us) 00:27:34.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.178 =================================================================================================================== 00:27:34.178 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:34.178 [2024-04-27 05:10:04.012187] bdev_raid.c:1234:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:27:34.178 [2024-04-27 05:10:04.012310] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:34.178 [2024-04-27 05:10:04.012438] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:34.178 [2024-04-27 05:10:04.012452] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:27:34.178 05:10:04 -- common/autotest_common.sh@950 -- # wait 144061 00:27:34.179 [2024-04-27 05:10:04.090820] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:34.745 05:10:04 -- bdev/bdev_raid.sh@711 -- # return 0 00:27:34.745 00:27:34.745 real 0m30.209s 00:27:34.745 user 0m46.870s 00:27:34.745 sys 0m3.845s 00:27:34.745 05:10:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:34.745 ************************************ 00:27:34.745 END TEST raid5f_rebuild_test_sb 00:27:34.745 ************************************ 00:27:34.745 05:10:04 -- common/autotest_common.sh@10 -- # set +x 00:27:34.745 05:10:04 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:27:34.745 00:27:34.745 real 12m20.143s 00:27:34.745 user 20m55.478s 00:27:34.745 sys 1m46.061s 00:27:34.745 05:10:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:34.745 05:10:04 -- common/autotest_common.sh@10 -- # set +x 00:27:34.745 ************************************ 00:27:34.745 END TEST bdev_raid 00:27:34.745 ************************************ 00:27:34.745 05:10:04 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:27:34.745 05:10:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:34.745 05:10:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:34.745 05:10:04 -- common/autotest_common.sh@10 -- # set +x 00:27:34.745 ************************************ 00:27:34.745 START TEST bdevperf_config 00:27:34.745 ************************************ 00:27:34.745 05:10:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:27:35.003 * Looking for test storage... 
00:27:35.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:27:35.003 05:10:04 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:27:35.003 05:10:04 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:27:35.003 05:10:04 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:27:35.003 05:10:04 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:35.003 05:10:04 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:35.003 05:10:04 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:27:35.003 05:10:04 -- bdevperf/common.sh@8 -- # local job_section=global 00:27:35.003 05:10:04 -- bdevperf/common.sh@9 -- # local rw=read 00:27:35.003 05:10:04 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:35.003 05:10:04 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:27:35.003 05:10:04 -- bdevperf/common.sh@13 -- # cat 00:27:35.003 05:10:04 -- bdevperf/common.sh@18 -- # job='[global]' 00:27:35.003 00:27:35.003 05:10:04 -- bdevperf/common.sh@19 -- # echo 00:27:35.003 05:10:04 -- bdevperf/common.sh@20 -- # cat 00:27:35.003 05:10:04 -- bdevperf/test_config.sh@18 -- # create_job job0 00:27:35.003 05:10:04 -- bdevperf/common.sh@8 -- # local job_section=job0 00:27:35.003 05:10:04 -- bdevperf/common.sh@9 -- # local rw= 00:27:35.003 05:10:04 -- bdevperf/common.sh@10 -- # local filename= 00:27:35.003 05:10:04 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:27:35.003 05:10:04 -- bdevperf/common.sh@18 -- # job='[job0]' 00:27:35.003 00:27:35.003 05:10:04 -- bdevperf/common.sh@19 -- # echo 00:27:35.003 05:10:04 -- bdevperf/common.sh@20 -- # cat 00:27:35.003 05:10:04 -- bdevperf/test_config.sh@19 -- # create_job job1 00:27:35.003 05:10:04 -- bdevperf/common.sh@8 -- # local job_section=job1 00:27:35.003 05:10:04 -- bdevperf/common.sh@9 -- # local rw= 00:27:35.003 05:10:04 -- bdevperf/common.sh@10 -- # local filename= 00:27:35.003 05:10:04 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:27:35.003 05:10:04 -- bdevperf/common.sh@18 -- # job='[job1]' 00:27:35.003 00:27:35.003 05:10:04 -- bdevperf/common.sh@19 -- # echo 00:27:35.003 05:10:04 -- bdevperf/common.sh@20 -- # cat 00:27:35.003 05:10:04 -- bdevperf/test_config.sh@20 -- # create_job job2 00:27:35.003 05:10:04 -- bdevperf/common.sh@8 -- # local job_section=job2 00:27:35.003 05:10:04 -- bdevperf/common.sh@9 -- # local rw= 00:27:35.003 05:10:04 -- bdevperf/common.sh@10 -- # local filename= 00:27:35.003 05:10:04 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:27:35.003 00:27:35.003 05:10:04 -- bdevperf/common.sh@18 -- # job='[job2]' 00:27:35.003 05:10:04 -- bdevperf/common.sh@19 -- # echo 00:27:35.003 05:10:04 -- bdevperf/common.sh@20 -- # cat 00:27:35.003 05:10:04 -- bdevperf/test_config.sh@21 -- # create_job job3 00:27:35.003 05:10:04 -- bdevperf/common.sh@8 -- # local job_section=job3 00:27:35.003 05:10:04 -- bdevperf/common.sh@9 -- # local rw= 00:27:35.003 05:10:04 -- bdevperf/common.sh@10 -- # local filename= 00:27:35.003 05:10:04 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:27:35.003 00:27:35.003 05:10:04 -- bdevperf/common.sh@18 -- # job='[job3]' 00:27:35.003 05:10:04 -- bdevperf/common.sh@19 -- # echo 00:27:35.003 05:10:04 -- bdevperf/common.sh@20 -- # cat 00:27:35.003 05:10:04 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:38.289 05:10:07 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-04-27 05:10:04.764597] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:38.289 [2024-04-27 05:10:04.764872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144838 ] 00:27:38.289 Using job config with 4 jobs 00:27:38.289 [2024-04-27 05:10:04.942656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.289 [2024-04-27 05:10:05.100468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.289 cpumask for '\''job0'\'' is too big 00:27:38.289 cpumask for '\''job1'\'' is too big 00:27:38.289 cpumask for '\''job2'\'' is too big 00:27:38.289 cpumask for '\''job3'\'' is too big 00:27:38.289 Running I/O for 2 seconds... 00:27:38.289 00:27:38.289 Latency(us) 00:27:38.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:38.289 Malloc0 : 2.01 24673.26 24.09 0.00 0.00 10364.15 2487.39 20614.05 00:27:38.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:38.289 Malloc0 : 2.02 24697.65 24.12 0.00 0.00 10324.02 2353.34 18230.92 00:27:38.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:38.289 Malloc0 : 2.02 24679.82 24.10 0.00 0.00 10301.33 2368.23 15728.64 00:27:38.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:38.289 Malloc0 : 2.02 24660.72 24.08 0.00 0.00 10280.47 2561.86 13345.51 00:27:38.289 =================================================================================================================== 00:27:38.289 Total : 98711.45 96.40 0.00 0.00 10317.43 2353.34 20614.05' 00:27:38.289 05:10:07 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-04-27 05:10:04.764597] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:38.289 [2024-04-27 05:10:04.764872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144838 ] 00:27:38.289 Using job config with 4 jobs 00:27:38.289 [2024-04-27 05:10:04.942656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.289 [2024-04-27 05:10:05.100468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.289 cpumask for '\''job0'\'' is too big 00:27:38.289 cpumask for '\''job1'\'' is too big 00:27:38.289 cpumask for '\''job2'\'' is too big 00:27:38.289 cpumask for '\''job3'\'' is too big 00:27:38.289 Running I/O for 2 seconds... 
00:27:38.289 00:27:38.289 Latency(us) 00:27:38.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:38.289 Malloc0 : 2.01 24673.26 24.09 0.00 0.00 10364.15 2487.39 20614.05 00:27:38.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:38.289 Malloc0 : 2.02 24697.65 24.12 0.00 0.00 10324.02 2353.34 18230.92 00:27:38.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:38.289 Malloc0 : 2.02 24679.82 24.10 0.00 0.00 10301.33 2368.23 15728.64 00:27:38.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:38.289 Malloc0 : 2.02 24660.72 24.08 0.00 0.00 10280.47 2561.86 13345.51 00:27:38.289 =================================================================================================================== 00:27:38.289 Total : 98711.45 96.40 0.00 0.00 10317.43 2353.34 20614.05' 00:27:38.289 05:10:07 -- bdevperf/common.sh@32 -- # echo '[2024-04-27 05:10:04.764597] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:38.289 [2024-04-27 05:10:04.764872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144838 ] 00:27:38.289 Using job config with 4 jobs 00:27:38.289 [2024-04-27 05:10:04.942656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.289 [2024-04-27 05:10:05.100468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.289 cpumask for '\''job0'\'' is too big 00:27:38.289 cpumask for '\''job1'\'' is too big 00:27:38.289 cpumask for '\''job2'\'' is too big 00:27:38.289 cpumask for '\''job3'\'' is too big 00:27:38.289 Running I/O for 2 seconds... 00:27:38.289 00:27:38.289 Latency(us) 00:27:38.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:38.289 Malloc0 : 2.01 24673.26 24.09 0.00 0.00 10364.15 2487.39 20614.05 00:27:38.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:38.289 Malloc0 : 2.02 24697.65 24.12 0.00 0.00 10324.02 2353.34 18230.92 00:27:38.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:38.289 Malloc0 : 2.02 24679.82 24.10 0.00 0.00 10301.33 2368.23 15728.64 00:27:38.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:38.289 Malloc0 : 2.02 24660.72 24.08 0.00 0.00 10280.47 2561.86 13345.51 00:27:38.289 =================================================================================================================== 00:27:38.289 Total : 98711.45 96.40 0.00 0.00 10317.43 2353.34 20614.05' 00:27:38.289 05:10:07 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:27:38.289 05:10:07 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:27:38.289 05:10:07 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:27:38.289 05:10:07 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:38.289 [2024-04-27 05:10:07.844886] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:27:38.289 [2024-04-27 05:10:07.845158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144882 ] 00:27:38.289 [2024-04-27 05:10:08.014179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.290 [2024-04-27 05:10:08.129984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.561 cpumask for 'job0' is too big 00:27:38.561 cpumask for 'job1' is too big 00:27:38.561 cpumask for 'job2' is too big 00:27:38.561 cpumask for 'job3' is too big 00:27:41.102 05:10:10 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:27:41.102 Running I/O for 2 seconds... 00:27:41.102 00:27:41.102 Latency(us) 00:27:41.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.102 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:41.102 Malloc0 : 2.01 29488.78 28.80 0.00 0.00 8673.49 1675.64 13345.51 00:27:41.102 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:41.102 Malloc0 : 2.02 29469.22 28.78 0.00 0.00 8662.46 1563.93 11677.32 00:27:41.102 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:41.103 Malloc0 : 2.02 29447.97 28.76 0.00 0.00 8653.08 1608.61 11021.96 00:27:41.103 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:41.103 Malloc0 : 2.02 29428.95 28.74 0.00 0.00 8643.20 1839.48 10366.60 00:27:41.103 =================================================================================================================== 00:27:41.103 Total : 117834.92 115.07 0.00 0.00 8658.06 1563.93 13345.51' 00:27:41.103 05:10:10 -- bdevperf/test_config.sh@27 -- # cleanup 00:27:41.103 05:10:10 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:41.103 05:10:10 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:27:41.103 05:10:10 -- bdevperf/common.sh@8 -- # local job_section=job0 00:27:41.103 00:27:41.103 05:10:10 -- bdevperf/common.sh@9 -- # local rw=write 00:27:41.103 05:10:10 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:41.103 05:10:10 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:27:41.103 05:10:10 -- bdevperf/common.sh@18 -- # job='[job0]' 00:27:41.103 05:10:10 -- bdevperf/common.sh@19 -- # echo 00:27:41.103 05:10:10 -- bdevperf/common.sh@20 -- # cat 00:27:41.103 05:10:10 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:27:41.103 05:10:10 -- bdevperf/common.sh@8 -- # local job_section=job1 00:27:41.103 00:27:41.103 05:10:10 -- bdevperf/common.sh@9 -- # local rw=write 00:27:41.103 05:10:10 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:41.103 05:10:10 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:27:41.103 05:10:10 -- bdevperf/common.sh@18 -- # job='[job1]' 00:27:41.103 05:10:10 -- bdevperf/common.sh@19 -- # echo 00:27:41.103 05:10:10 -- bdevperf/common.sh@20 -- # cat 00:27:41.103 05:10:10 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:27:41.103 05:10:10 -- bdevperf/common.sh@8 -- # local job_section=job2 00:27:41.103 00:27:41.103 05:10:10 -- bdevperf/common.sh@9 -- # local rw=write 00:27:41.103 05:10:10 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:41.103 05:10:10 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:27:41.103 05:10:10 -- 
bdevperf/common.sh@18 -- # job='[job2]' 00:27:41.103 05:10:10 -- bdevperf/common.sh@19 -- # echo 00:27:41.103 05:10:10 -- bdevperf/common.sh@20 -- # cat 00:27:41.103 05:10:10 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:44.392 05:10:13 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-04-27 05:10:10.851717] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:44.392 [2024-04-27 05:10:10.852001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144921 ] 00:27:44.392 Using job config with 3 jobs 00:27:44.392 [2024-04-27 05:10:11.017887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.392 [2024-04-27 05:10:11.140878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.392 cpumask for '\''job0'\'' is too big 00:27:44.392 cpumask for '\''job1'\'' is too big 00:27:44.392 cpumask for '\''job2'\'' is too big 00:27:44.392 Running I/O for 2 seconds... 00:27:44.392 00:27:44.392 Latency(us) 00:27:44.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.392 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:44.392 Malloc0 : 2.01 40384.86 39.44 0.00 0.00 6332.07 1705.43 9413.35 00:27:44.392 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:44.392 Malloc0 : 2.01 40355.52 39.41 0.00 0.00 6325.26 1578.82 7983.48 00:27:44.392 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:44.392 Malloc0 : 2.01 40412.22 39.47 0.00 0.00 6305.03 741.00 8519.68 00:27:44.392 =================================================================================================================== 00:27:44.392 Total : 121152.60 118.31 0.00 0.00 6320.77 741.00 9413.35' 00:27:44.392 05:10:13 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-04-27 05:10:10.851717] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:44.392 [2024-04-27 05:10:10.852001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144921 ] 00:27:44.392 Using job config with 3 jobs 00:27:44.392 [2024-04-27 05:10:11.017887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.392 [2024-04-27 05:10:11.140878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.392 cpumask for '\''job0'\'' is too big 00:27:44.392 cpumask for '\''job1'\'' is too big 00:27:44.392 cpumask for '\''job2'\'' is too big 00:27:44.392 Running I/O for 2 seconds... 
00:27:44.392 00:27:44.392 Latency(us) 00:27:44.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.392 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:44.392 Malloc0 : 2.01 40384.86 39.44 0.00 0.00 6332.07 1705.43 9413.35 00:27:44.392 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:44.392 Malloc0 : 2.01 40355.52 39.41 0.00 0.00 6325.26 1578.82 7983.48 00:27:44.392 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:44.392 Malloc0 : 2.01 40412.22 39.47 0.00 0.00 6305.03 741.00 8519.68 00:27:44.392 =================================================================================================================== 00:27:44.392 Total : 121152.60 118.31 0.00 0.00 6320.77 741.00 9413.35' 00:27:44.392 05:10:13 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:27:44.392 05:10:13 -- bdevperf/common.sh@32 -- # echo '[2024-04-27 05:10:10.851717] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:44.392 [2024-04-27 05:10:10.852001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144921 ] 00:27:44.392 Using job config with 3 jobs 00:27:44.392 [2024-04-27 05:10:11.017887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.392 [2024-04-27 05:10:11.140878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.392 cpumask for '\''job0'\'' is too big 00:27:44.392 cpumask for '\''job1'\'' is too big 00:27:44.392 cpumask for '\''job2'\'' is too big 00:27:44.392 Running I/O for 2 seconds... 
00:27:44.392 00:27:44.392 Latency(us) 00:27:44.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.392 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:44.392 Malloc0 : 2.01 40384.86 39.44 0.00 0.00 6332.07 1705.43 9413.35 00:27:44.392 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:44.392 Malloc0 : 2.01 40355.52 39.41 0.00 0.00 6325.26 1578.82 7983.48 00:27:44.392 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:44.392 Malloc0 : 2.01 40412.22 39.47 0.00 0.00 6305.03 741.00 8519.68 00:27:44.392 =================================================================================================================== 00:27:44.393 Total : 121152.60 118.31 0.00 0.00 6320.77 741.00 9413.35' 00:27:44.393 05:10:13 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:27:44.393 05:10:13 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:27:44.393 05:10:13 -- bdevperf/test_config.sh@35 -- # cleanup 00:27:44.393 05:10:13 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:44.393 05:10:13 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:27:44.393 05:10:13 -- bdevperf/common.sh@8 -- # local job_section=global 00:27:44.393 05:10:13 -- bdevperf/common.sh@9 -- # local rw=rw 00:27:44.393 05:10:13 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:27:44.393 05:10:13 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:27:44.393 05:10:13 -- bdevperf/common.sh@13 -- # cat 00:27:44.393 05:10:13 -- bdevperf/common.sh@18 -- # job='[global]' 00:27:44.393 00:27:44.393 05:10:13 -- bdevperf/common.sh@19 -- # echo 00:27:44.393 05:10:13 -- bdevperf/common.sh@20 -- # cat 00:27:44.393 05:10:13 -- bdevperf/test_config.sh@38 -- # create_job job0 00:27:44.393 05:10:13 -- bdevperf/common.sh@8 -- # local job_section=job0 00:27:44.393 05:10:13 -- bdevperf/common.sh@9 -- # local rw= 00:27:44.393 05:10:13 -- bdevperf/common.sh@10 -- # local filename= 00:27:44.393 05:10:13 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:27:44.393 00:27:44.393 05:10:13 -- bdevperf/common.sh@18 -- # job='[job0]' 00:27:44.393 05:10:13 -- bdevperf/common.sh@19 -- # echo 00:27:44.393 05:10:13 -- bdevperf/common.sh@20 -- # cat 00:27:44.393 05:10:13 -- bdevperf/test_config.sh@39 -- # create_job job1 00:27:44.393 05:10:13 -- bdevperf/common.sh@8 -- # local job_section=job1 00:27:44.393 05:10:13 -- bdevperf/common.sh@9 -- # local rw= 00:27:44.393 05:10:13 -- bdevperf/common.sh@10 -- # local filename= 00:27:44.393 05:10:13 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:27:44.393 05:10:13 -- bdevperf/common.sh@18 -- # job='[job1]' 00:27:44.393 00:27:44.393 05:10:13 -- bdevperf/common.sh@19 -- # echo 00:27:44.393 05:10:13 -- bdevperf/common.sh@20 -- # cat 00:27:44.393 05:10:13 -- bdevperf/test_config.sh@40 -- # create_job job2 00:27:44.393 05:10:13 -- bdevperf/common.sh@8 -- # local job_section=job2 00:27:44.393 05:10:13 -- bdevperf/common.sh@9 -- # local rw= 00:27:44.393 05:10:13 -- bdevperf/common.sh@10 -- # local filename= 00:27:44.393 05:10:13 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:27:44.393 05:10:13 -- bdevperf/common.sh@18 -- # job='[job2]' 00:27:44.393 00:27:44.393 05:10:13 -- bdevperf/common.sh@19 -- # echo 00:27:44.393 05:10:13 -- bdevperf/common.sh@20 -- # cat 00:27:44.393 05:10:13 -- bdevperf/test_config.sh@41 -- # create_job job3 00:27:44.393 05:10:13 -- bdevperf/common.sh@8 -- # local 
job_section=job3 00:27:44.393 05:10:13 -- bdevperf/common.sh@9 -- # local rw= 00:27:44.393 05:10:13 -- bdevperf/common.sh@10 -- # local filename= 00:27:44.393 05:10:13 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:27:44.393 05:10:13 -- bdevperf/common.sh@18 -- # job='[job3]' 00:27:44.393 00:27:44.393 05:10:13 -- bdevperf/common.sh@19 -- # echo 00:27:44.393 05:10:13 -- bdevperf/common.sh@20 -- # cat 00:27:44.393 05:10:13 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:46.936 05:10:16 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-04-27 05:10:13.862536] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:46.936 [2024-04-27 05:10:13.862764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144972 ] 00:27:46.936 Using job config with 4 jobs 00:27:46.936 [2024-04-27 05:10:14.016852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.936 [2024-04-27 05:10:14.130862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.936 cpumask for '\''job0'\'' is too big 00:27:46.936 cpumask for '\''job1'\'' is too big 00:27:46.936 cpumask for '\''job2'\'' is too big 00:27:46.936 cpumask for '\''job3'\'' is too big 00:27:46.936 Running I/O for 2 seconds... 00:27:46.936 00:27:46.936 Latency(us) 00:27:46.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.936 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc0 : 2.02 14188.65 13.86 0.00 0.00 18027.54 3574.69 28001.75 00:27:46.936 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc1 : 2.02 14178.10 13.85 0.00 0.00 18026.59 4081.11 27882.59 00:27:46.936 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc0 : 2.04 14205.63 13.87 0.00 0.00 17934.58 3366.17 24546.21 00:27:46.936 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc1 : 2.04 14195.04 13.86 0.00 0.00 17933.57 3753.43 24427.05 00:27:46.936 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc0 : 2.04 14185.02 13.85 0.00 0.00 17890.70 3515.11 20852.36 00:27:46.936 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc1 : 2.04 14174.58 13.84 0.00 0.00 17891.02 3798.11 20971.52 00:27:46.936 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc0 : 2.04 14163.98 13.83 0.00 0.00 17850.19 3470.43 20256.58 00:27:46.936 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc1 : 2.04 14153.67 13.82 0.00 0.00 17847.84 3902.37 20018.27 00:27:46.936 =================================================================================================================== 00:27:46.936 Total : 113444.67 110.79 0.00 0.00 17925.03 3366.17 28001.75' 00:27:46.936 05:10:16 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-04-27 05:10:13.862536] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:27:46.936 [2024-04-27 05:10:13.862764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144972 ] 00:27:46.936 Using job config with 4 jobs 00:27:46.936 [2024-04-27 05:10:14.016852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.936 [2024-04-27 05:10:14.130862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.936 cpumask for '\''job0'\'' is too big 00:27:46.936 cpumask for '\''job1'\'' is too big 00:27:46.936 cpumask for '\''job2'\'' is too big 00:27:46.936 cpumask for '\''job3'\'' is too big 00:27:46.936 Running I/O for 2 seconds... 00:27:46.936 00:27:46.936 Latency(us) 00:27:46.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.936 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc0 : 2.02 14188.65 13.86 0.00 0.00 18027.54 3574.69 28001.75 00:27:46.936 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc1 : 2.02 14178.10 13.85 0.00 0.00 18026.59 4081.11 27882.59 00:27:46.936 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc0 : 2.04 14205.63 13.87 0.00 0.00 17934.58 3366.17 24546.21 00:27:46.936 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc1 : 2.04 14195.04 13.86 0.00 0.00 17933.57 3753.43 24427.05 00:27:46.936 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc0 : 2.04 14185.02 13.85 0.00 0.00 17890.70 3515.11 20852.36 00:27:46.936 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc1 : 2.04 14174.58 13.84 0.00 0.00 17891.02 3798.11 20971.52 00:27:46.936 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc0 : 2.04 14163.98 13.83 0.00 0.00 17850.19 3470.43 20256.58 00:27:46.936 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc1 : 2.04 14153.67 13.82 0.00 0.00 17847.84 3902.37 20018.27 00:27:46.936 =================================================================================================================== 00:27:46.936 Total : 113444.67 110.79 0.00 0.00 17925.03 3366.17 28001.75' 00:27:46.936 05:10:16 -- bdevperf/common.sh@32 -- # echo '[2024-04-27 05:10:13.862536] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:46.936 [2024-04-27 05:10:13.862764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144972 ] 00:27:46.936 Using job config with 4 jobs 00:27:46.936 [2024-04-27 05:10:14.016852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.936 [2024-04-27 05:10:14.130862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.936 cpumask for '\''job0'\'' is too big 00:27:46.936 cpumask for '\''job1'\'' is too big 00:27:46.936 cpumask for '\''job2'\'' is too big 00:27:46.936 cpumask for '\''job3'\'' is too big 00:27:46.936 Running I/O for 2 seconds... 
00:27:46.936 00:27:46.936 Latency(us) 00:27:46.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.936 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc0 : 2.02 14188.65 13.86 0.00 0.00 18027.54 3574.69 28001.75 00:27:46.936 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc1 : 2.02 14178.10 13.85 0.00 0.00 18026.59 4081.11 27882.59 00:27:46.936 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc0 : 2.04 14205.63 13.87 0.00 0.00 17934.58 3366.17 24546.21 00:27:46.936 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc1 : 2.04 14195.04 13.86 0.00 0.00 17933.57 3753.43 24427.05 00:27:46.936 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc0 : 2.04 14185.02 13.85 0.00 0.00 17890.70 3515.11 20852.36 00:27:46.936 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc1 : 2.04 14174.58 13.84 0.00 0.00 17891.02 3798.11 20971.52 00:27:46.936 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.936 Malloc0 : 2.04 14163.98 13.83 0.00 0.00 17850.19 3470.43 20256.58 00:27:46.936 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:46.937 Malloc1 : 2.04 14153.67 13.82 0.00 0.00 17847.84 3902.37 20018.27 00:27:46.937 =================================================================================================================== 00:27:46.937 Total : 113444.67 110.79 0.00 0.00 17925.03 3366.17 28001.75' 00:27:46.937 05:10:16 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:27:46.937 05:10:16 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:27:46.937 05:10:16 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:27:46.937 05:10:16 -- bdevperf/test_config.sh@44 -- # cleanup 00:27:46.937 05:10:16 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:46.937 05:10:16 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:46.937 00:27:46.937 real 0m12.226s 00:27:46.937 user 0m10.345s 00:27:46.937 sys 0m1.320s 00:27:46.937 05:10:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:46.937 ************************************ 00:27:46.937 END TEST bdevperf_config 00:27:46.937 ************************************ 00:27:46.937 05:10:16 -- common/autotest_common.sh@10 -- # set +x 00:27:47.214 05:10:16 -- spdk/autotest.sh@198 -- # uname -s 00:27:47.214 05:10:16 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:27:47.214 05:10:16 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:27:47.214 05:10:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:47.214 05:10:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:47.214 05:10:16 -- common/autotest_common.sh@10 -- # set +x 00:27:47.214 ************************************ 00:27:47.214 START TEST reactor_set_interrupt 00:27:47.214 ************************************ 00:27:47.214 05:10:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:27:47.214 * Looking for test storage... 
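The job-count assertion traced above is nothing more than a text match on the bdevperf banner. A minimal sketch of that style of check, assuming the workspace layout shown in the trace (the function name get_num_jobs_sketch is illustrative and not part of the test scripts):

  get_num_jobs_sketch() {
    # bdevperf prints "Using job config with N jobs" once -j <conf> is parsed;
    # pull N back out of the captured output, as common.sh does with grep -oE
    echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
  }

  output=$(/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json \
    -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 2>&1)
  [[ $(get_num_jobs_sketch "$output") == 4 ]] || echo "unexpected job count"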
00:27:47.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:47.214 05:10:16 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:27:47.214 05:10:16 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:27:47.214 05:10:16 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:47.214 05:10:16 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:27:47.214 05:10:16 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:27:47.214 05:10:16 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:47.214 05:10:16 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:27:47.214 05:10:16 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:27:47.214 05:10:16 -- common/autotest_common.sh@34 -- # set -e 00:27:47.214 05:10:16 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:27:47.214 05:10:16 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:27:47.214 05:10:16 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:27:47.214 05:10:16 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:27:47.214 05:10:16 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:27:47.214 05:10:16 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:27:47.215 05:10:16 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:27:47.215 05:10:16 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:27:47.215 05:10:16 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:27:47.215 05:10:16 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:27:47.215 05:10:16 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:27:47.215 05:10:16 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:27:47.215 05:10:16 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:27:47.215 05:10:16 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:27:47.215 05:10:16 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:27:47.215 05:10:16 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:27:47.215 05:10:16 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:27:47.215 05:10:16 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:27:47.215 05:10:16 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:27:47.215 05:10:16 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:27:47.215 05:10:16 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:27:47.215 05:10:16 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:27:47.215 05:10:16 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:27:47.215 05:10:16 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:27:47.215 05:10:16 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:27:47.215 05:10:16 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:27:47.215 05:10:16 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:27:47.215 05:10:16 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:27:47.215 05:10:16 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:27:47.215 05:10:16 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:27:47.215 05:10:16 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 
00:27:47.215 05:10:16 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:27:47.215 05:10:16 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:27:47.215 05:10:16 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:27:47.215 05:10:16 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:27:47.215 05:10:16 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:27:47.215 05:10:16 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:27:47.215 05:10:16 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:27:47.215 05:10:16 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:27:47.215 05:10:16 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:27:47.215 05:10:16 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:27:47.215 05:10:16 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:27:47.215 05:10:16 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:27:47.215 05:10:16 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:27:47.215 05:10:16 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:27:47.215 05:10:16 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:27:47.215 05:10:16 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:27:47.215 05:10:16 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:27:47.215 05:10:16 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:27:47.215 05:10:16 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:27:47.215 05:10:16 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:27:47.215 05:10:16 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:27:47.215 05:10:16 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:27:47.215 05:10:16 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:27:47.215 05:10:16 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:27:47.215 05:10:16 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:27:47.215 05:10:16 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:27:47.215 05:10:16 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:27:47.215 05:10:16 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:27:47.215 05:10:16 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:27:47.215 05:10:16 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:27:47.215 05:10:16 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:27:47.215 05:10:16 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:27:47.215 05:10:16 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:27:47.215 05:10:16 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:27:47.215 05:10:16 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:27:47.215 05:10:16 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:27:47.215 05:10:16 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:27:47.215 05:10:16 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:27:47.215 05:10:16 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:27:47.215 05:10:16 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:27:47.215 05:10:16 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:27:47.215 05:10:16 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:27:47.215 05:10:16 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:27:47.215 05:10:16 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:27:47.215 05:10:16 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:27:47.215 05:10:16 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:27:47.215 05:10:16 -- 
common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:27:47.215 05:10:16 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:27:47.215 05:10:16 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:27:47.215 05:10:16 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:27:47.215 05:10:16 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:27:47.215 05:10:16 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:27:47.215 05:10:16 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:27:47.215 05:10:16 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:27:47.215 05:10:16 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:27:47.215 05:10:16 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:27:47.215 05:10:16 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:27:47.215 05:10:16 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:27:47.215 05:10:16 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:27:47.215 05:10:16 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:27:47.215 05:10:16 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:27:47.215 05:10:16 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:27:47.215 05:10:16 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:27:47.215 05:10:16 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:27:47.215 05:10:16 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:27:47.215 05:10:16 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:27:47.215 05:10:16 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:27:47.215 05:10:16 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:27:47.215 #define SPDK_CONFIG_H 00:27:47.215 #define SPDK_CONFIG_APPS 1 00:27:47.215 #define SPDK_CONFIG_ARCH native 00:27:47.215 #define SPDK_CONFIG_ASAN 1 00:27:47.215 #undef SPDK_CONFIG_AVAHI 00:27:47.215 #undef SPDK_CONFIG_CET 00:27:47.215 #define SPDK_CONFIG_COVERAGE 1 00:27:47.215 #define SPDK_CONFIG_CROSS_PREFIX 00:27:47.215 #undef SPDK_CONFIG_CRYPTO 00:27:47.215 #undef SPDK_CONFIG_CRYPTO_MLX5 00:27:47.215 #undef SPDK_CONFIG_CUSTOMOCF 00:27:47.215 #undef SPDK_CONFIG_DAOS 00:27:47.215 #define SPDK_CONFIG_DAOS_DIR 00:27:47.215 #define SPDK_CONFIG_DEBUG 1 00:27:47.215 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:27:47.215 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:27:47.215 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:27:47.215 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:27:47.215 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:27:47.215 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:27:47.215 #define SPDK_CONFIG_EXAMPLES 1 00:27:47.215 #undef SPDK_CONFIG_FC 00:27:47.215 #define SPDK_CONFIG_FC_PATH 00:27:47.215 #define SPDK_CONFIG_FIO_PLUGIN 1 00:27:47.215 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:27:47.215 #undef SPDK_CONFIG_FUSE 00:27:47.215 #undef SPDK_CONFIG_FUZZER 00:27:47.215 #define SPDK_CONFIG_FUZZER_LIB 00:27:47.215 #undef SPDK_CONFIG_GOLANG 00:27:47.215 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:27:47.215 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:27:47.215 #undef 
SPDK_CONFIG_HAVE_LIBARCHIVE 00:27:47.215 #undef SPDK_CONFIG_HAVE_LIBBSD 00:27:47.215 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:27:47.215 #define SPDK_CONFIG_IDXD 1 00:27:47.215 #undef SPDK_CONFIG_IDXD_KERNEL 00:27:47.215 #undef SPDK_CONFIG_IPSEC_MB 00:27:47.215 #define SPDK_CONFIG_IPSEC_MB_DIR 00:27:47.215 #define SPDK_CONFIG_ISAL 1 00:27:47.215 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:27:47.215 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:27:47.215 #define SPDK_CONFIG_LIBDIR 00:27:47.215 #undef SPDK_CONFIG_LTO 00:27:47.215 #define SPDK_CONFIG_MAX_LCORES 00:27:47.215 #define SPDK_CONFIG_NVME_CUSE 1 00:27:47.215 #undef SPDK_CONFIG_OCF 00:27:47.215 #define SPDK_CONFIG_OCF_PATH 00:27:47.215 #define SPDK_CONFIG_OPENSSL_PATH 00:27:47.215 #undef SPDK_CONFIG_PGO_CAPTURE 00:27:47.215 #undef SPDK_CONFIG_PGO_USE 00:27:47.215 #define SPDK_CONFIG_PREFIX /usr/local 00:27:47.215 #define SPDK_CONFIG_RAID5F 1 00:27:47.215 #undef SPDK_CONFIG_RBD 00:27:47.215 #define SPDK_CONFIG_RDMA 1 00:27:47.215 #define SPDK_CONFIG_RDMA_PROV verbs 00:27:47.215 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:27:47.215 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:27:47.215 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:27:47.215 #undef SPDK_CONFIG_SHARED 00:27:47.215 #undef SPDK_CONFIG_SMA 00:27:47.215 #define SPDK_CONFIG_TESTS 1 00:27:47.215 #undef SPDK_CONFIG_TSAN 00:27:47.215 #undef SPDK_CONFIG_UBLK 00:27:47.215 #define SPDK_CONFIG_UBSAN 1 00:27:47.215 #define SPDK_CONFIG_UNIT_TESTS 1 00:27:47.215 #undef SPDK_CONFIG_URING 00:27:47.215 #define SPDK_CONFIG_URING_PATH 00:27:47.215 #undef SPDK_CONFIG_URING_ZNS 00:27:47.215 #undef SPDK_CONFIG_USDT 00:27:47.215 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:27:47.215 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:27:47.215 #undef SPDK_CONFIG_VFIO_USER 00:27:47.215 #define SPDK_CONFIG_VFIO_USER_DIR 00:27:47.215 #define SPDK_CONFIG_VHOST 1 00:27:47.215 #define SPDK_CONFIG_VIRTIO 1 00:27:47.215 #undef SPDK_CONFIG_VTUNE 00:27:47.215 #define SPDK_CONFIG_VTUNE_DIR 00:27:47.215 #define SPDK_CONFIG_WERROR 1 00:27:47.215 #define SPDK_CONFIG_WPDK_DIR 00:27:47.215 #undef SPDK_CONFIG_XNVME 00:27:47.215 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:27:47.216 05:10:16 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:27:47.216 05:10:16 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:47.216 05:10:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:47.216 05:10:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:47.216 05:10:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:47.216 05:10:16 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:47.216 05:10:16 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:47.216 05:10:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:47.216 05:10:16 -- paths/export.sh@5 -- # export PATH 00:27:47.216 05:10:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:47.216 05:10:16 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:27:47.216 05:10:16 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:27:47.216 05:10:16 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:27:47.216 05:10:16 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:27:47.216 05:10:17 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:27:47.216 05:10:17 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:27:47.216 05:10:17 -- pm/common@16 -- # TEST_TAG=N/A 00:27:47.216 05:10:17 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:27:47.216 05:10:17 -- common/autotest_common.sh@52 -- # : 1 00:27:47.216 05:10:17 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:27:47.216 05:10:17 -- common/autotest_common.sh@56 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:27:47.216 05:10:17 -- common/autotest_common.sh@58 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:27:47.216 05:10:17 -- common/autotest_common.sh@60 -- # : 1 00:27:47.216 05:10:17 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:27:47.216 05:10:17 -- common/autotest_common.sh@62 -- # : 1 00:27:47.216 05:10:17 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:27:47.216 05:10:17 -- common/autotest_common.sh@64 -- # : 00:27:47.216 05:10:17 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:27:47.216 05:10:17 -- common/autotest_common.sh@66 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:27:47.216 05:10:17 -- common/autotest_common.sh@68 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:27:47.216 05:10:17 -- common/autotest_common.sh@70 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:27:47.216 05:10:17 -- common/autotest_common.sh@72 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:27:47.216 05:10:17 -- common/autotest_common.sh@74 -- # : 1 00:27:47.216 05:10:17 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:27:47.216 05:10:17 -- common/autotest_common.sh@76 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:27:47.216 05:10:17 -- common/autotest_common.sh@78 -- # : 0 00:27:47.216 05:10:17 -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:27:47.216 05:10:17 -- common/autotest_common.sh@80 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:27:47.216 05:10:17 -- common/autotest_common.sh@82 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:27:47.216 05:10:17 -- common/autotest_common.sh@84 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:27:47.216 05:10:17 -- common/autotest_common.sh@86 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:27:47.216 05:10:17 -- common/autotest_common.sh@88 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:27:47.216 05:10:17 -- common/autotest_common.sh@90 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:27:47.216 05:10:17 -- common/autotest_common.sh@92 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:27:47.216 05:10:17 -- common/autotest_common.sh@94 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:27:47.216 05:10:17 -- common/autotest_common.sh@96 -- # : rdma 00:27:47.216 05:10:17 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:27:47.216 05:10:17 -- common/autotest_common.sh@98 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:27:47.216 05:10:17 -- common/autotest_common.sh@100 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:27:47.216 05:10:17 -- common/autotest_common.sh@102 -- # : 1 00:27:47.216 05:10:17 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:27:47.216 05:10:17 -- common/autotest_common.sh@104 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:27:47.216 05:10:17 -- common/autotest_common.sh@106 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:27:47.216 05:10:17 -- common/autotest_common.sh@108 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:27:47.216 05:10:17 -- common/autotest_common.sh@110 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:27:47.216 05:10:17 -- common/autotest_common.sh@112 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:27:47.216 05:10:17 -- common/autotest_common.sh@114 -- # : 1 00:27:47.216 05:10:17 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:27:47.216 05:10:17 -- common/autotest_common.sh@116 -- # : 1 00:27:47.216 05:10:17 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:27:47.216 05:10:17 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:27:47.216 05:10:17 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:27:47.216 05:10:17 -- common/autotest_common.sh@120 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:27:47.216 05:10:17 -- common/autotest_common.sh@122 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:27:47.216 05:10:17 -- common/autotest_common.sh@124 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:27:47.216 05:10:17 -- 
common/autotest_common.sh@126 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:27:47.216 05:10:17 -- common/autotest_common.sh@128 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:27:47.216 05:10:17 -- common/autotest_common.sh@130 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:27:47.216 05:10:17 -- common/autotest_common.sh@132 -- # : v23.11 00:27:47.216 05:10:17 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:27:47.216 05:10:17 -- common/autotest_common.sh@134 -- # : true 00:27:47.216 05:10:17 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:27:47.216 05:10:17 -- common/autotest_common.sh@136 -- # : 1 00:27:47.216 05:10:17 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:27:47.216 05:10:17 -- common/autotest_common.sh@138 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:27:47.216 05:10:17 -- common/autotest_common.sh@140 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:27:47.216 05:10:17 -- common/autotest_common.sh@142 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:27:47.216 05:10:17 -- common/autotest_common.sh@144 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:27:47.216 05:10:17 -- common/autotest_common.sh@146 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:27:47.216 05:10:17 -- common/autotest_common.sh@148 -- # : 00:27:47.216 05:10:17 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:27:47.216 05:10:17 -- common/autotest_common.sh@150 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:27:47.216 05:10:17 -- common/autotest_common.sh@152 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:27:47.216 05:10:17 -- common/autotest_common.sh@154 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:27:47.216 05:10:17 -- common/autotest_common.sh@156 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:27:47.216 05:10:17 -- common/autotest_common.sh@158 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:27:47.216 05:10:17 -- common/autotest_common.sh@160 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:27:47.216 05:10:17 -- common/autotest_common.sh@163 -- # : 00:27:47.216 05:10:17 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:27:47.216 05:10:17 -- common/autotest_common.sh@165 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:27:47.216 05:10:17 -- common/autotest_common.sh@167 -- # : 0 00:27:47.216 05:10:17 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:27:47.216 05:10:17 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:27:47.216 05:10:17 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:27:47.216 05:10:17 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:27:47.216 05:10:17 -- common/autotest_common.sh@172 -- # 
DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:27:47.216 05:10:17 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:47.217 05:10:17 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:47.217 05:10:17 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:47.217 05:10:17 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:47.217 05:10:17 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:27:47.217 05:10:17 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:27:47.217 05:10:17 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:27:47.217 05:10:17 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:27:47.217 05:10:17 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:27:47.217 05:10:17 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:27:47.217 05:10:17 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:47.217 05:10:17 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:47.217 05:10:17 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:47.217 05:10:17 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:47.217 05:10:17 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:27:47.217 05:10:17 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:27:47.217 05:10:17 -- common/autotest_common.sh@196 -- # cat 00:27:47.217 05:10:17 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:27:47.217 05:10:17 -- 
common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:47.217 05:10:17 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:47.217 05:10:17 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:47.217 05:10:17 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:47.217 05:10:17 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:27:47.217 05:10:17 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:27:47.217 05:10:17 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:27:47.217 05:10:17 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:27:47.217 05:10:17 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:27:47.217 05:10:17 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:27:47.217 05:10:17 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:27:47.217 05:10:17 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:27:47.217 05:10:17 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:27:47.217 05:10:17 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:27:47.217 05:10:17 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:27:47.217 05:10:17 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:27:47.217 05:10:17 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:47.217 05:10:17 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:47.217 05:10:17 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:27:47.217 05:10:17 -- common/autotest_common.sh@249 -- # export valgrind= 00:27:47.217 05:10:17 -- common/autotest_common.sh@249 -- # valgrind= 00:27:47.217 05:10:17 -- common/autotest_common.sh@255 -- # uname -s 00:27:47.217 05:10:17 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:27:47.217 05:10:17 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:27:47.217 05:10:17 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:27:47.217 05:10:17 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:27:47.217 05:10:17 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:27:47.217 05:10:17 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:27:47.217 05:10:17 -- common/autotest_common.sh@265 -- # MAKE=make 00:27:47.217 05:10:17 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:27:47.217 05:10:17 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:27:47.217 05:10:17 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:27:47.217 05:10:17 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:27:47.217 05:10:17 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:27:47.217 05:10:17 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:27:47.217 05:10:17 -- common/autotest_common.sh@309 -- # [[ -z 145046 ]] 00:27:47.217 05:10:17 -- common/autotest_common.sh@309 -- # kill -0 145046 00:27:47.217 05:10:17 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:27:47.217 05:10:17 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:27:47.217 05:10:17 -- 
common/autotest_common.sh@321 -- # local requested_size=2147483648 00:27:47.217 05:10:17 -- common/autotest_common.sh@322 -- # local mount target_dir 00:27:47.217 05:10:17 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:27:47.217 05:10:17 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:27:47.217 05:10:17 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:27:47.217 05:10:17 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:27:47.217 05:10:17 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.OlELyN 00:27:47.217 05:10:17 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:27:47.217 05:10:17 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:27:47.217 05:10:17 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:27:47.217 05:10:17 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.OlELyN/tests/interrupt /tmp/spdk.OlELyN 00:27:47.217 05:10:17 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:27:47.217 05:10:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:47.217 05:10:17 -- common/autotest_common.sh@318 -- # df -T 00:27:47.217 05:10:17 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:27:47.217 05:10:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:47.217 05:10:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:47.217 05:10:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=1248964608 00:27:47.217 05:10:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253683200 00:27:47.217 05:10:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=4718592 00:27:47.217 05:10:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:47.217 05:10:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:27:47.217 05:10:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:27:47.217 05:10:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=9150537728 00:27:47.217 05:10:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:27:47.217 05:10:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=11449479168 00:27:47.217 05:10:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:47.217 05:10:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:47.217 05:10:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:47.217 05:10:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267146240 00:27:47.217 05:10:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6268403712 00:27:47.217 05:10:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:27:47.217 05:10:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:47.217 05:10:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:47.217 05:10:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:47.217 05:10:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:27:47.217 05:10:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:27:47.217 05:10:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:27:47.217 05:10:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:47.217 05:10:17 -- 
common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:27:47.217 05:10:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:27:47.217 05:10:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=103061504 00:27:47.217 05:10:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109395968 00:27:47.217 05:10:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:27:47.217 05:10:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:47.217 05:10:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:47.217 05:10:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:47.217 05:10:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253675008 00:27:47.217 05:10:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253679104 00:27:47.217 05:10:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:27:47.217 05:10:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:47.217 05:10:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:27:47.217 05:10:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:27:47.217 05:10:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=94972997632 00:27:47.217 05:10:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:27:47.217 05:10:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=4729782272 00:27:47.217 05:10:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:47.217 05:10:17 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:27:47.217 * Looking for test storage... 00:27:47.217 05:10:17 -- common/autotest_common.sh@359 -- # local target_space new_size 00:27:47.217 05:10:17 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:27:47.217 05:10:17 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:27:47.217 05:10:17 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:47.218 05:10:17 -- common/autotest_common.sh@363 -- # mount=/ 00:27:47.218 05:10:17 -- common/autotest_common.sh@365 -- # target_space=9150537728 00:27:47.218 05:10:17 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:27:47.218 05:10:17 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:27:47.218 05:10:17 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:27:47.218 05:10:17 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:27:47.218 05:10:17 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:27:47.218 05:10:17 -- common/autotest_common.sh@372 -- # new_size=13664071680 00:27:47.218 05:10:17 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:27:47.218 05:10:17 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:27:47.218 05:10:17 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:27:47.218 05:10:17 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:47.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:47.218 05:10:17 -- common/autotest_common.sh@380 -- # return 0 00:27:47.218 05:10:17 -- common/autotest_common.sh@1667 -- # set 
-o errtrace 00:27:47.218 05:10:17 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:27:47.218 05:10:17 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:27:47.218 05:10:17 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:27:47.218 05:10:17 -- common/autotest_common.sh@1672 -- # true 00:27:47.218 05:10:17 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:27:47.218 05:10:17 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:27:47.218 05:10:17 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:27:47.218 05:10:17 -- common/autotest_common.sh@27 -- # exec 00:27:47.218 05:10:17 -- common/autotest_common.sh@29 -- # exec 00:27:47.218 05:10:17 -- common/autotest_common.sh@31 -- # xtrace_restore 00:27:47.218 05:10:17 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:27:47.218 05:10:17 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:27:47.218 05:10:17 -- common/autotest_common.sh@18 -- # set -x 00:27:47.218 05:10:17 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:47.218 05:10:17 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:27:47.218 05:10:17 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:27:47.218 05:10:17 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:27:47.218 05:10:17 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:27:47.218 05:10:17 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:27:47.218 05:10:17 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:27:47.218 05:10:17 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:27:47.218 05:10:17 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:27:47.218 05:10:17 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.218 05:10:17 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:27:47.218 05:10:17 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=145086 00:27:47.218 05:10:17 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:47.218 05:10:17 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 145086 /var/tmp/spdk.sock 00:27:47.218 05:10:17 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:27:47.218 05:10:17 -- common/autotest_common.sh@819 -- # '[' -z 145086 ']' 00:27:47.218 05:10:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.218 05:10:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:47.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.218 05:10:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
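For reference, the space check that accepted / as test storage above works out as follows; this is a back-of-the-envelope recomputation of the numbers printed in the trace, and the formula new_size = used + requested is inferred from those numbers rather than quoted from autotest_common.sh:

  requested=2214592512    # requested_size as printed (2 GiB plus a 64 MiB margin)
  used=11449479168        # space already used on /dev/vda1, per the df output
  size=20616794112        # size of the / filesystem, per the df output
  new_size=$(( used + requested ))    # 13664071680, matching the trace
  echo $(( new_size * 100 / size ))   # ~66, comfortably under the 95% cutoff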
00:27:47.218 05:10:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:47.218 05:10:17 -- common/autotest_common.sh@10 -- # set +x 00:27:47.218 [2024-04-27 05:10:17.113760] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:47.218 [2024-04-27 05:10:17.113987] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145086 ] 00:27:47.477 [2024-04-27 05:10:17.294754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:47.477 [2024-04-27 05:10:17.393214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.477 [2024-04-27 05:10:17.393349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.477 [2024-04-27 05:10:17.393356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.736 [2024-04-27 05:10:17.519930] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:48.304 05:10:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:48.305 05:10:18 -- common/autotest_common.sh@852 -- # return 0 00:27:48.305 05:10:18 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:27:48.305 05:10:18 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:48.564 Malloc0 00:27:48.564 Malloc1 00:27:48.564 Malloc2 00:27:48.564 05:10:18 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:27:48.564 05:10:18 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:27:48.564 05:10:18 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:27:48.564 05:10:18 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:27:48.564 5000+0 records in 00:27:48.564 5000+0 records out 00:27:48.564 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0278372 s, 368 MB/s 00:27:48.564 05:10:18 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:27:48.823 AIO0 00:27:48.823 05:10:18 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 145086 00:27:48.823 05:10:18 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 145086 without_thd 00:27:48.823 05:10:18 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=145086 00:27:48.823 05:10:18 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:27:48.823 05:10:18 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:27:48.823 05:10:18 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:27:48.823 05:10:18 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:27:48.823 05:10:18 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:27:48.823 05:10:18 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:27:48.823 05:10:18 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:48.823 05:10:18 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:27:48.823 05:10:18 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:49.082 
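The cpumask-to-thread-ID lookup being traced here reduces to one RPC plus a jq filter; the rpc.py invocation and jq expression below are taken verbatim from the trace, while the wrapper name is illustrative, and a target is assumed to be listening on the default /var/tmp/spdk.sock:

  reactor_thread_ids_sketch() {
    local cpumask=$1   # e.g. 1 for reactor 0, 4 for reactor 2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats |
      jq --arg reactor_cpumask "$cpumask" \
         '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
  }

  reactor_thread_ids_sketch 1   # prints 1 for app_thread on reactor 0, per the trace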
05:10:18 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:27:49.082 05:10:18 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:27:49.082 05:10:18 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:27:49.082 05:10:18 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:27:49.082 05:10:18 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:27:49.082 05:10:18 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:27:49.082 05:10:18 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:49.082 05:10:18 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:27:49.082 05:10:18 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:49.341 05:10:19 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:27:49.341 05:10:19 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:27:49.341 spdk_thread ids are 1 on reactor0. 00:27:49.341 05:10:19 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:27:49.341 05:10:19 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:49.341 05:10:19 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 145086 0 00:27:49.341 05:10:19 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145086 0 idle 00:27:49.341 05:10:19 -- interrupt/interrupt_common.sh@33 -- # local pid=145086 00:27:49.341 05:10:19 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:27:49.341 05:10:19 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:49.341 05:10:19 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:49.341 05:10:19 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:49.341 05:10:19 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:49.341 05:10:19 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:49.341 05:10:19 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:49.341 05:10:19 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:27:49.341 05:10:19 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145086 -w 256 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145086 root 20 0 20.1t 75808 26184 S 6.7 0.6 0:00.41 reactor_0' 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@48 -- # echo 145086 root 20 0 20.1t 75808 26184 S 6.7 0.6 0:00.41 reactor_0 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:49.600 05:10:19 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:49.600 05:10:19 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 145086 1 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145086 1 idle 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@33 -- # local 
pid=145086 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145086 -w 256 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145095 root 20 0 20.1t 75808 26184 S 0.0 0.6 0:00.00 reactor_1' 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@48 -- # echo 145095 root 20 0 20.1t 75808 26184 S 0.0 0.6 0:00.00 reactor_1 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:49.600 05:10:19 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:49.601 05:10:19 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:49.601 05:10:19 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:49.601 05:10:19 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:49.601 05:10:19 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:49.601 05:10:19 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:49.601 05:10:19 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 145086 2 00:27:49.601 05:10:19 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145086 2 idle 00:27:49.601 05:10:19 -- interrupt/interrupt_common.sh@33 -- # local pid=145086 00:27:49.601 05:10:19 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:27:49.601 05:10:19 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:49.601 05:10:19 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:49.601 05:10:19 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:49.601 05:10:19 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:49.601 05:10:19 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:49.601 05:10:19 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:49.601 05:10:19 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145086 -w 256 00:27:49.601 05:10:19 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:27:49.860 05:10:19 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145102 root 20 0 20.1t 75808 26184 S 0.0 0.6 0:00.00 reactor_2' 00:27:49.860 05:10:19 -- interrupt/interrupt_common.sh@48 -- # echo 145102 root 20 0 20.1t 75808 26184 S 0.0 0.6 0:00.00 reactor_2 00:27:49.860 05:10:19 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:49.860 05:10:19 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:49.860 05:10:19 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:49.860 05:10:19 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:49.860 05:10:19 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:49.860 05:10:19 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:49.860 05:10:19 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 
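The idle/busy verdicts above come from a single batch sample of top. A condensed sketch of that sampling step, with thresholds implied by the comparisons in the trace (a reactor thread at or below 30% CPU counts as idle, at or above 70% counts as busy); the helper name is illustrative:

  reactor_cpu_rate_sketch() {
    local pid=$1 idx=$2
    # one batch iteration, threads view, wide output, restricted to the target pid
    top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" |
      sed -e 's/^\s*//g' | awk '{print $9}' | cut -d. -f1
  }

  rate=$(reactor_cpu_rate_sketch 145086 0)
  (( rate <= 30 )) && echo "reactor_0 looks idle"
  (( rate >= 70 )) && echo "reactor_0 looks busy"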
00:27:49.860 05:10:19 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:49.860 05:10:19 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:27:49.860 05:10:19 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:27:49.860 05:10:19 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:27:50.119 [2024-04-27 05:10:19.880606] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:50.119 05:10:19 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:27:50.377 [2024-04-27 05:10:20.080547] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:27:50.377 [2024-04-27 05:10:20.081771] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:50.377 05:10:20 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:27:50.636 [2024-04-27 05:10:20.308214] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:27:50.636 [2024-04-27 05:10:20.308901] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:50.636 05:10:20 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:27:50.636 05:10:20 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 145086 0 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 145086 0 busy 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@33 -- # local pid=145086 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145086 -w 256 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145086 root 20 0 20.1t 75964 26184 R 93.8 0.6 0:00.81 reactor_0' 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@48 -- # echo 145086 root 20 0 20.1t 75964 26184 R 93.8 0.6 0:00.81 reactor_0 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=93.8 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=93 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@51 -- # [[ 93 -lt 70 ]] 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:50.636 05:10:20 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:27:50.636 05:10:20 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 145086 2 
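The switch just traced is driven entirely over the app's RPC socket. A condensed restatement of the calls involved, using only commands and arguments that appear in this log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # move thread 1 (app_thread) onto core 1 before touching reactors 0 and 2
    $RPC thread_set_cpumask -i 1 -m 0x2
    # -d disables interrupt mode, i.e. puts the reactor back into polled mode
    $RPC --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
    $RPC --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d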
00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 145086 2 busy 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@33 -- # local pid=145086 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145086 -w 256 00:27:50.636 05:10:20 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:27:50.895 05:10:20 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145102 root 20 0 20.1t 75964 26184 R 99.9 0.6 0:00.33 reactor_2' 00:27:50.895 05:10:20 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:50.895 05:10:20 -- interrupt/interrupt_common.sh@48 -- # echo 145102 root 20 0 20.1t 75964 26184 R 99.9 0.6 0:00.33 reactor_2 00:27:50.895 05:10:20 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:50.895 05:10:20 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:27:50.895 05:10:20 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:27:50.895 05:10:20 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:27:50.895 05:10:20 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:27:50.895 05:10:20 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:27:50.895 05:10:20 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:50.895 05:10:20 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:27:51.153 [2024-04-27 05:10:20.936292] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
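Re-enabling interrupt mode is the same RPC without the -d flag, after which the test expects the reactor to drop back to idle. A hedged sketch of that check: the `(( j = 10 ))` countdown from the trace is reused here as a retry budget, and the sleep between samples is an assumption, not something the trace shows:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    pid=145086
    $RPC --plugin interrupt_plugin reactor_set_interrupt_mode 2        # no -d: back to interrupt mode
    for (( j = 10; j != 0; j-- )); do
        cpu=$(top -bHn 1 -p "$pid" -w 256 | grep reactor_2 | awk '{print $9}')
        [ "${cpu%.*}" -le 30 ] && break                                # reactor_2 is idle again
        sleep 0.5                                                      # pacing between samples (assumed)
    done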
00:27:51.153 [2024-04-27 05:10:20.937531] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:51.153 05:10:20 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:27:51.154 05:10:20 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 145086 2 00:27:51.154 05:10:20 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145086 2 idle 00:27:51.154 05:10:20 -- interrupt/interrupt_common.sh@33 -- # local pid=145086 00:27:51.154 05:10:20 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:27:51.154 05:10:20 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:51.154 05:10:20 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:51.154 05:10:20 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:51.154 05:10:20 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:51.154 05:10:20 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:51.154 05:10:20 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:51.154 05:10:20 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145086 -w 256 00:27:51.154 05:10:20 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:27:51.413 05:10:21 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145102 root 20 0 20.1t 76028 26184 S 0.0 0.6 0:00.61 reactor_2' 00:27:51.413 05:10:21 -- interrupt/interrupt_common.sh@48 -- # echo 145102 root 20 0 20.1t 76028 26184 S 0.0 0.6 0:00.61 reactor_2 00:27:51.413 05:10:21 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:51.413 05:10:21 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:51.413 05:10:21 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:51.413 05:10:21 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:51.413 05:10:21 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:51.413 05:10:21 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:51.413 05:10:21 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:51.413 05:10:21 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:51.413 05:10:21 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:27:51.671 [2024-04-27 05:10:21.380291] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:27:51.671 [2024-04-27 05:10:21.381249] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:51.671 05:10:21 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:27:51.671 05:10:21 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:27:51.671 05:10:21 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:27:51.930 [2024-04-27 05:10:21.636696] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
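The unwind of the thread-move variant is symmetric: reactor 0 goes back to interrupt mode and the app thread's cpumask is restored to core 0. A short sketch mirroring the two RPCs just traced:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC --plugin interrupt_plugin reactor_set_interrupt_mode 0        # reactor 0 back to interrupt mode
    $RPC thread_set_cpumask -i 1 -m 0x1                                # app_thread back onto core 0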
00:27:51.930 05:10:21 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 145086 0 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145086 0 idle 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@33 -- # local pid=145086 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145086 -w 256 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145086 root 20 0 20.1t 76124 26184 S 0.0 0.6 0:01.71 reactor_0' 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@48 -- # echo 145086 root 20 0 20.1t 76124 26184 S 0.0 0.6 0:01.71 reactor_0 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:51.930 05:10:21 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:51.930 05:10:21 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:27:51.930 05:10:21 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:27:51.930 05:10:21 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:27:51.930 05:10:21 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 145086 00:27:51.930 05:10:21 -- common/autotest_common.sh@926 -- # '[' -z 145086 ']' 00:27:51.930 05:10:21 -- common/autotest_common.sh@930 -- # kill -0 145086 00:27:51.930 05:10:21 -- common/autotest_common.sh@931 -- # uname 00:27:51.930 05:10:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:51.930 05:10:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145086 00:27:51.930 05:10:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:52.189 killing process with pid 145086 00:27:52.190 05:10:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:52.190 05:10:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145086' 00:27:52.190 05:10:21 -- common/autotest_common.sh@945 -- # kill 145086 00:27:52.190 05:10:21 -- common/autotest_common.sh@950 -- # wait 145086 00:27:52.449 05:10:22 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:27:52.449 05:10:22 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:27:52.449 05:10:22 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:27:52.449 05:10:22 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.449 05:10:22 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 
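The killprocess teardown traced a few lines above follows the usual autotest pattern: verify the pid is set and alive, confirm it is the expected process, then kill and reap it. A condensed sketch built only from commands visible in this log:

    pid=145086
    [ -n "$pid" ] || exit 1                            # the helper refuses an empty pid
    kill -0 "$pid"                                     # still alive?
    process_name=$(ps --no-headers -o comm= "$pid")    # reactor_0 for an SPDK app
    [ "$process_name" = sudo ] && echo "would have to target the child instead"   # branch seen above
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                        # reap it so cleanup can run on a clean slate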
00:27:52.449 05:10:22 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:27:52.449 05:10:22 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=145233 00:27:52.449 05:10:22 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:52.449 05:10:22 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 145233 /var/tmp/spdk.sock 00:27:52.449 05:10:22 -- common/autotest_common.sh@819 -- # '[' -z 145233 ']' 00:27:52.449 05:10:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.449 05:10:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:52.449 05:10:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.449 05:10:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:52.449 05:10:22 -- common/autotest_common.sh@10 -- # set +x 00:27:52.449 [2024-04-27 05:10:22.316467] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:52.449 [2024-04-27 05:10:22.316725] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145233 ] 00:27:52.707 [2024-04-27 05:10:22.485597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:52.707 [2024-04-27 05:10:22.592651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.707 [2024-04-27 05:10:22.592804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.707 [2024-04-27 05:10:22.592814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.967 [2024-04-27 05:10:22.710332] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
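Starting the target for the next test case is the same recipe as before: launch interrupt_tgt with a three-core mask, remember its pid, install a cleanup trap, and wait for the RPC socket. A sketch assembled from the trace above; the backgrounding with `&` and `$!` is how the pid capture is assumed to happen, the paths, flags, and helper names are copied from the log:

    rpc_addr=/var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r "$rpc_addr" -E -g &
    intr_tgt_pid=$!                                    # 145233 in this run
    trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$intr_tgt_pid" "$rpc_addr"          # autotest_common.sh helper, polls the socket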
00:27:53.532 05:10:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:53.532 05:10:23 -- common/autotest_common.sh@852 -- # return 0 00:27:53.532 05:10:23 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:27:53.532 05:10:23 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:53.791 Malloc0 00:27:53.791 Malloc1 00:27:53.791 Malloc2 00:27:53.791 05:10:23 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:27:53.791 05:10:23 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:27:53.791 05:10:23 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:27:53.791 05:10:23 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:27:54.061 5000+0 records in 00:27:54.062 5000+0 records out 00:27:54.062 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0288496 s, 355 MB/s 00:27:54.062 05:10:23 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:27:54.062 AIO0 00:27:54.062 05:10:23 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 145233 00:27:54.062 05:10:23 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 145233 00:27:54.062 05:10:23 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=145233 00:27:54.062 05:10:23 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:27:54.062 05:10:23 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:27:54.062 05:10:23 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:27:54.062 05:10:23 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:27:54.062 05:10:23 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:27:54.062 05:10:23 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:27:54.062 05:10:23 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:54.062 05:10:23 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:54.062 05:10:23 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:27:54.320 05:10:24 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:27:54.320 05:10:24 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:27:54.320 05:10:24 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:27:54.320 05:10:24 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:27:54.320 05:10:24 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:27:54.320 05:10:24 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:27:54.320 05:10:24 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:54.320 05:10:24 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:27:54.320 05:10:24 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:54.579 05:10:24 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:27:54.579 spdk_thread ids are 1 on reactor0. 
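reactor_get_thread_ids, traced above for cpumasks 0x1 and 0x4, simply filters thread_get_stats output by cpumask. A minimal sketch of that lookup; as the trace shows, the helper drops the 0x prefix before comparing (0x4 becomes 4), and the jq expression is the one logged above:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    reactor_cpumask=0x4
    reactor_cpumask=${reactor_cpumask#0x}              # compare without the 0x prefix
    $RPC thread_get_stats \
        | jq --arg reactor_cpumask "$reactor_cpumask" \
             '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'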
00:27:54.579 05:10:24 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:27:54.579 05:10:24 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:27:54.579 05:10:24 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:54.579 05:10:24 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 145233 0 00:27:54.579 05:10:24 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145233 0 idle 00:27:54.579 05:10:24 -- interrupt/interrupt_common.sh@33 -- # local pid=145233 00:27:54.579 05:10:24 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:27:54.579 05:10:24 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:54.579 05:10:24 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:54.579 05:10:24 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:54.579 05:10:24 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:54.579 05:10:24 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:54.579 05:10:24 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:54.579 05:10:24 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:27:54.579 05:10:24 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145233 -w 256 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145233 root 20 0 20.1t 75760 26144 S 6.7 0.6 0:00.42 reactor_0' 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@48 -- # echo 145233 root 20 0 20.1t 75760 26144 S 6.7 0.6 0:00.42 reactor_0 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:54.865 05:10:24 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:54.865 05:10:24 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 145233 1 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145233 1 idle 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@33 -- # local pid=145233 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145233 -w 256 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145244 root 20 0 20.1t 75760 26144 S 0.0 0.6 0:00.00 reactor_1' 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@48 -- # echo 145244 root 20 0 20.1t 75760 26144 S 0.0 0.6 0:00.00 reactor_1 00:27:54.865 05:10:24 -- 
interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:54.865 05:10:24 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:54.865 05:10:24 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 145233 2 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145233 2 idle 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@33 -- # local pid=145233 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:54.865 05:10:24 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:55.147 05:10:24 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145233 -w 256 00:27:55.147 05:10:24 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:27:55.147 05:10:24 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145245 root 20 0 20.1t 75760 26144 S 0.0 0.6 0:00.00 reactor_2' 00:27:55.147 05:10:24 -- interrupt/interrupt_common.sh@48 -- # echo 145245 root 20 0 20.1t 75760 26144 S 0.0 0.6 0:00.00 reactor_2 00:27:55.147 05:10:24 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:55.147 05:10:24 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:55.147 05:10:24 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:55.147 05:10:24 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:55.147 05:10:24 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:55.147 05:10:24 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:55.147 05:10:24 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:55.147 05:10:24 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:55.147 05:10:24 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:27:55.147 05:10:24 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:27:55.406 [2024-04-27 05:10:25.202702] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:27:55.406 [2024-04-27 05:10:25.203032] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
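The only difference from the first pass is the without_thd parameter: here it is empty, so the `'[' x '!=' x ']'` test at reactor_set_interrupt.sh@33 is false, the thread_set_cpumask moves are skipped, and the mode switches are issued directly. A hedged sketch of that gate; the loop helper name is a hypothetical stand-in, the RPC call is from the log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    without_thd=$1                                     # "without_thd" in the first pass, empty in this one
    if [ "${without_thd}x" != "x" ]; then
        move_app_threads_off_reactor0                  # stand-in for the thd0_ids cpumask loop
    fi
    $RPC --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d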
00:27:55.406 [2024-04-27 05:10:25.204406] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:55.406 05:10:25 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:27:55.665 [2024-04-27 05:10:25.470547] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:27:55.665 [2024-04-27 05:10:25.471038] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:55.665 05:10:25 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:27:55.665 05:10:25 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 145233 0 00:27:55.665 05:10:25 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 145233 0 busy 00:27:55.665 05:10:25 -- interrupt/interrupt_common.sh@33 -- # local pid=145233 00:27:55.665 05:10:25 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:27:55.665 05:10:25 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:27:55.665 05:10:25 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:27:55.665 05:10:25 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:55.665 05:10:25 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:55.665 05:10:25 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:55.665 05:10:25 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145233 -w 256 00:27:55.665 05:10:25 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145233 root 20 0 20.1t 75884 26144 R 99.9 0.6 0:00.86 reactor_0' 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@48 -- # echo 145233 root 20 0 20.1t 75884 26144 R 99.9 0.6 0:00.86 reactor_0 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:55.923 05:10:25 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:27:55.923 05:10:25 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 145233 2 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 145233 2 busy 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@33 -- # local pid=145233 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145233 -w 256 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 
145245 root 20 0 20.1t 75884 26144 R 99.9 0.6 0:00.34 reactor_2' 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@48 -- # echo 145245 root 20 0 20.1t 75884 26144 R 99.9 0.6 0:00.34 reactor_2 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:27:55.923 05:10:25 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:55.923 05:10:25 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:27:56.182 [2024-04-27 05:10:26.098863] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:27:56.182 [2024-04-27 05:10:26.099096] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:56.440 05:10:26 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:27:56.440 05:10:26 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 145233 2 00:27:56.440 05:10:26 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145233 2 idle 00:27:56.440 05:10:26 -- interrupt/interrupt_common.sh@33 -- # local pid=145233 00:27:56.440 05:10:26 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:27:56.440 05:10:26 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:56.440 05:10:26 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:56.440 05:10:26 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:56.440 05:10:26 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:56.440 05:10:26 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:56.440 05:10:26 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:56.440 05:10:26 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145233 -w 256 00:27:56.440 05:10:26 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:27:56.440 05:10:26 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145245 root 20 0 20.1t 75980 26144 S 0.0 0.6 0:00.62 reactor_2' 00:27:56.440 05:10:26 -- interrupt/interrupt_common.sh@48 -- # echo 145245 root 20 0 20.1t 75980 26144 S 0.0 0.6 0:00.62 reactor_2 00:27:56.440 05:10:26 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:56.440 05:10:26 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:56.440 05:10:26 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:56.441 05:10:26 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:56.441 05:10:26 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:56.441 05:10:26 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:56.441 05:10:26 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:56.441 05:10:26 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:56.441 05:10:26 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:27:56.699 [2024-04-27 05:10:26.534940] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to 
enable interrupt mode on reactor 0. 00:27:56.699 [2024-04-27 05:10:26.535432] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:27:56.699 [2024-04-27 05:10:26.535517] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:56.699 05:10:26 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:27:56.699 05:10:26 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 145233 0 00:27:56.699 05:10:26 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 145233 0 idle 00:27:56.699 05:10:26 -- interrupt/interrupt_common.sh@33 -- # local pid=145233 00:27:56.699 05:10:26 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:27:56.699 05:10:26 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:56.699 05:10:26 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:56.699 05:10:26 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:56.699 05:10:26 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:56.699 05:10:26 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:56.699 05:10:26 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:56.699 05:10:26 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 145233 -w 256 00:27:56.699 05:10:26 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:27:56.957 05:10:26 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 145233 root 20 0 20.1t 76024 26144 S 6.7 0.6 0:01.76 reactor_0' 00:27:56.957 05:10:26 -- interrupt/interrupt_common.sh@48 -- # echo 145233 root 20 0 20.1t 76024 26144 S 6.7 0.6 0:01.76 reactor_0 00:27:56.957 05:10:26 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:56.957 05:10:26 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:56.957 05:10:26 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:27:56.957 05:10:26 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:27:56.957 05:10:26 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:56.957 05:10:26 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:56.957 05:10:26 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:27:56.957 05:10:26 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:56.957 05:10:26 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:27:56.957 05:10:26 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:27:56.957 05:10:26 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:27:56.957 05:10:26 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 145233 00:27:56.958 05:10:26 -- common/autotest_common.sh@926 -- # '[' -z 145233 ']' 00:27:56.958 05:10:26 -- common/autotest_common.sh@930 -- # kill -0 145233 00:27:56.958 05:10:26 -- common/autotest_common.sh@931 -- # uname 00:27:56.958 05:10:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:56.958 05:10:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145233 00:27:56.958 05:10:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:56.958 05:10:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:56.958 killing process with pid 145233 00:27:56.958 05:10:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145233' 00:27:56.958 05:10:26 -- common/autotest_common.sh@945 -- # kill 145233 00:27:56.958 05:10:26 -- common/autotest_common.sh@950 -- # wait 145233 00:27:57.528 05:10:27 -- 
interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:27:57.528 05:10:27 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:27:57.528 00:27:57.528 real 0m10.273s 00:27:57.528 user 0m10.401s 00:27:57.528 sys 0m1.621s 00:27:57.528 05:10:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.528 05:10:27 -- common/autotest_common.sh@10 -- # set +x 00:27:57.528 ************************************ 00:27:57.528 END TEST reactor_set_interrupt 00:27:57.528 ************************************ 00:27:57.528 05:10:27 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:27:57.528 05:10:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:57.528 05:10:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:57.528 05:10:27 -- common/autotest_common.sh@10 -- # set +x 00:27:57.528 ************************************ 00:27:57.528 START TEST reap_unregistered_poller 00:27:57.528 ************************************ 00:27:57.528 05:10:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:27:57.528 * Looking for test storage... 00:27:57.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:57.528 05:10:27 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:27:57.528 05:10:27 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:27:57.528 05:10:27 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:57.528 05:10:27 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:27:57.528 05:10:27 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
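The header of reap_unregistered_poller.sh, traced above, locates itself the usual autotest way: derive the test directory from the script path, walk two levels up for the repo root, then source the common helpers. A compact restatement; using "$0" here is an assumption about how the traced dirname call is written:

    testdir=$(readlink -f "$(dirname "$0")")           # /home/vagrant/spdk_repo/spdk/test/interrupt
    rootdir=$(readlink -f "$testdir/../..")            # /home/vagrant/spdk_repo/spdk
    source "$rootdir/test/common/autotest_common.sh"   # pulls in build_config.sh and the exports below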
00:27:57.528 05:10:27 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:57.528 05:10:27 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:27:57.528 05:10:27 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:27:57.528 05:10:27 -- common/autotest_common.sh@34 -- # set -e 00:27:57.528 05:10:27 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:27:57.528 05:10:27 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:27:57.528 05:10:27 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:27:57.528 05:10:27 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:27:57.528 05:10:27 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:27:57.528 05:10:27 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:27:57.528 05:10:27 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:27:57.528 05:10:27 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:27:57.528 05:10:27 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:27:57.528 05:10:27 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:27:57.528 05:10:27 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:27:57.528 05:10:27 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:27:57.528 05:10:27 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:27:57.528 05:10:27 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:27:57.528 05:10:27 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:27:57.528 05:10:27 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:27:57.528 05:10:27 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:27:57.528 05:10:27 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:27:57.528 05:10:27 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:27:57.528 05:10:27 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:27:57.528 05:10:27 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:27:57.528 05:10:27 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:27:57.528 05:10:27 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:27:57.528 05:10:27 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:27:57.528 05:10:27 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:27:57.528 05:10:27 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:27:57.528 05:10:27 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:27:57.528 05:10:27 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:27:57.528 05:10:27 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:27:57.528 05:10:27 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:27:57.528 05:10:27 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:27:57.528 05:10:27 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:27:57.528 05:10:27 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:27:57.528 05:10:27 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:27:57.528 05:10:27 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:27:57.528 05:10:27 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:27:57.528 05:10:27 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:27:57.528 05:10:27 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:27:57.528 05:10:27 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:27:57.528 05:10:27 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:27:57.528 05:10:27 
-- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:27:57.528 05:10:27 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:27:57.528 05:10:27 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:27:57.528 05:10:27 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:27:57.528 05:10:27 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:27:57.528 05:10:27 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:27:57.528 05:10:27 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:27:57.528 05:10:27 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:27:57.528 05:10:27 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:27:57.528 05:10:27 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:27:57.528 05:10:27 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:27:57.528 05:10:27 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:27:57.528 05:10:27 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:27:57.528 05:10:27 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:27:57.528 05:10:27 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:27:57.528 05:10:27 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:27:57.528 05:10:27 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:27:57.528 05:10:27 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:27:57.528 05:10:27 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:27:57.528 05:10:27 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:27:57.528 05:10:27 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:27:57.528 05:10:27 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:27:57.528 05:10:27 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:27:57.528 05:10:27 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:27:57.528 05:10:27 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:27:57.528 05:10:27 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:27:57.528 05:10:27 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:27:57.528 05:10:27 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:27:57.528 05:10:27 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:27:57.528 05:10:27 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:27:57.528 05:10:27 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:27:57.528 05:10:27 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:27:57.528 05:10:27 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:27:57.528 05:10:27 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:27:57.528 05:10:27 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:27:57.528 05:10:27 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:27:57.528 05:10:27 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:27:57.528 05:10:27 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:27:57.528 05:10:27 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:27:57.528 05:10:27 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:27:57.528 05:10:27 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:27:57.528 05:10:27 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:27:57.528 05:10:27 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:27:57.528 05:10:27 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:27:57.528 05:10:27 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:27:57.528 05:10:27 -- common/applications.sh@8 
-- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:27:57.528 05:10:27 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:27:57.528 05:10:27 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:27:57.528 05:10:27 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:27:57.528 05:10:27 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:27:57.528 05:10:27 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:27:57.528 05:10:27 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:27:57.528 05:10:27 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:27:57.528 05:10:27 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:27:57.528 05:10:27 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:27:57.528 05:10:27 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:27:57.528 05:10:27 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:27:57.528 05:10:27 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:27:57.528 05:10:27 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:27:57.528 #define SPDK_CONFIG_H 00:27:57.528 #define SPDK_CONFIG_APPS 1 00:27:57.529 #define SPDK_CONFIG_ARCH native 00:27:57.529 #define SPDK_CONFIG_ASAN 1 00:27:57.529 #undef SPDK_CONFIG_AVAHI 00:27:57.529 #undef SPDK_CONFIG_CET 00:27:57.529 #define SPDK_CONFIG_COVERAGE 1 00:27:57.529 #define SPDK_CONFIG_CROSS_PREFIX 00:27:57.529 #undef SPDK_CONFIG_CRYPTO 00:27:57.529 #undef SPDK_CONFIG_CRYPTO_MLX5 00:27:57.529 #undef SPDK_CONFIG_CUSTOMOCF 00:27:57.529 #undef SPDK_CONFIG_DAOS 00:27:57.529 #define SPDK_CONFIG_DAOS_DIR 00:27:57.529 #define SPDK_CONFIG_DEBUG 1 00:27:57.529 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:27:57.529 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:27:57.529 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:27:57.529 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:27:57.529 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:27:57.529 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:27:57.529 #define SPDK_CONFIG_EXAMPLES 1 00:27:57.529 #undef SPDK_CONFIG_FC 00:27:57.529 #define SPDK_CONFIG_FC_PATH 00:27:57.529 #define SPDK_CONFIG_FIO_PLUGIN 1 00:27:57.529 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:27:57.529 #undef SPDK_CONFIG_FUSE 00:27:57.529 #undef SPDK_CONFIG_FUZZER 00:27:57.529 #define SPDK_CONFIG_FUZZER_LIB 00:27:57.529 #undef SPDK_CONFIG_GOLANG 00:27:57.529 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:27:57.529 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:27:57.529 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:27:57.529 #undef SPDK_CONFIG_HAVE_LIBBSD 00:27:57.529 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:27:57.529 #define SPDK_CONFIG_IDXD 1 00:27:57.529 #undef SPDK_CONFIG_IDXD_KERNEL 00:27:57.529 #undef SPDK_CONFIG_IPSEC_MB 00:27:57.529 #define SPDK_CONFIG_IPSEC_MB_DIR 00:27:57.529 #define SPDK_CONFIG_ISAL 1 00:27:57.529 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:27:57.529 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:27:57.529 #define SPDK_CONFIG_LIBDIR 00:27:57.529 #undef SPDK_CONFIG_LTO 00:27:57.529 #define SPDK_CONFIG_MAX_LCORES 00:27:57.529 #define SPDK_CONFIG_NVME_CUSE 1 00:27:57.529 #undef SPDK_CONFIG_OCF 00:27:57.529 #define SPDK_CONFIG_OCF_PATH 00:27:57.529 #define 
SPDK_CONFIG_OPENSSL_PATH 00:27:57.529 #undef SPDK_CONFIG_PGO_CAPTURE 00:27:57.529 #undef SPDK_CONFIG_PGO_USE 00:27:57.529 #define SPDK_CONFIG_PREFIX /usr/local 00:27:57.529 #define SPDK_CONFIG_RAID5F 1 00:27:57.529 #undef SPDK_CONFIG_RBD 00:27:57.529 #define SPDK_CONFIG_RDMA 1 00:27:57.529 #define SPDK_CONFIG_RDMA_PROV verbs 00:27:57.529 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:27:57.529 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:27:57.529 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:27:57.529 #undef SPDK_CONFIG_SHARED 00:27:57.529 #undef SPDK_CONFIG_SMA 00:27:57.529 #define SPDK_CONFIG_TESTS 1 00:27:57.529 #undef SPDK_CONFIG_TSAN 00:27:57.529 #undef SPDK_CONFIG_UBLK 00:27:57.529 #define SPDK_CONFIG_UBSAN 1 00:27:57.529 #define SPDK_CONFIG_UNIT_TESTS 1 00:27:57.529 #undef SPDK_CONFIG_URING 00:27:57.529 #define SPDK_CONFIG_URING_PATH 00:27:57.529 #undef SPDK_CONFIG_URING_ZNS 00:27:57.529 #undef SPDK_CONFIG_USDT 00:27:57.529 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:27:57.529 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:27:57.529 #undef SPDK_CONFIG_VFIO_USER 00:27:57.529 #define SPDK_CONFIG_VFIO_USER_DIR 00:27:57.529 #define SPDK_CONFIG_VHOST 1 00:27:57.529 #define SPDK_CONFIG_VIRTIO 1 00:27:57.529 #undef SPDK_CONFIG_VTUNE 00:27:57.529 #define SPDK_CONFIG_VTUNE_DIR 00:27:57.529 #define SPDK_CONFIG_WERROR 1 00:27:57.529 #define SPDK_CONFIG_WPDK_DIR 00:27:57.529 #undef SPDK_CONFIG_XNVME 00:27:57.529 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:27:57.529 05:10:27 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:27:57.529 05:10:27 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:57.529 05:10:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.529 05:10:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.529 05:10:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.529 05:10:27 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:57.529 05:10:27 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:57.529 05:10:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:57.529 05:10:27 -- paths/export.sh@5 -- # export PATH 00:27:57.529 05:10:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:57.529 05:10:27 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:27:57.529 05:10:27 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:27:57.529 05:10:27 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:27:57.529 05:10:27 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:27:57.529 05:10:27 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:27:57.529 05:10:27 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:27:57.529 05:10:27 -- pm/common@16 -- # TEST_TAG=N/A 00:27:57.529 05:10:27 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:27:57.529 05:10:27 -- common/autotest_common.sh@52 -- # : 1 00:27:57.529 05:10:27 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:27:57.529 05:10:27 -- common/autotest_common.sh@56 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:27:57.529 05:10:27 -- common/autotest_common.sh@58 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:27:57.529 05:10:27 -- common/autotest_common.sh@60 -- # : 1 00:27:57.529 05:10:27 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:27:57.529 05:10:27 -- common/autotest_common.sh@62 -- # : 1 00:27:57.529 05:10:27 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:27:57.529 05:10:27 -- common/autotest_common.sh@64 -- # : 00:27:57.529 05:10:27 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:27:57.529 05:10:27 -- common/autotest_common.sh@66 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:27:57.529 05:10:27 -- common/autotest_common.sh@68 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:27:57.529 05:10:27 -- common/autotest_common.sh@70 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:27:57.529 05:10:27 -- common/autotest_common.sh@72 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:27:57.529 05:10:27 -- common/autotest_common.sh@74 -- # : 1 00:27:57.529 05:10:27 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:27:57.529 05:10:27 -- common/autotest_common.sh@76 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:27:57.529 05:10:27 -- common/autotest_common.sh@78 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:27:57.529 05:10:27 -- common/autotest_common.sh@80 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:27:57.529 05:10:27 -- common/autotest_common.sh@82 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:27:57.529 05:10:27 -- common/autotest_common.sh@84 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:27:57.529 05:10:27 -- 
common/autotest_common.sh@86 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:27:57.529 05:10:27 -- common/autotest_common.sh@88 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:27:57.529 05:10:27 -- common/autotest_common.sh@90 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:27:57.529 05:10:27 -- common/autotest_common.sh@92 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:27:57.529 05:10:27 -- common/autotest_common.sh@94 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:27:57.529 05:10:27 -- common/autotest_common.sh@96 -- # : rdma 00:27:57.529 05:10:27 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:27:57.529 05:10:27 -- common/autotest_common.sh@98 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:27:57.529 05:10:27 -- common/autotest_common.sh@100 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:27:57.529 05:10:27 -- common/autotest_common.sh@102 -- # : 1 00:27:57.529 05:10:27 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:27:57.529 05:10:27 -- common/autotest_common.sh@104 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:27:57.529 05:10:27 -- common/autotest_common.sh@106 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:27:57.529 05:10:27 -- common/autotest_common.sh@108 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:27:57.529 05:10:27 -- common/autotest_common.sh@110 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:27:57.529 05:10:27 -- common/autotest_common.sh@112 -- # : 0 00:27:57.529 05:10:27 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:27:57.529 05:10:27 -- common/autotest_common.sh@114 -- # : 1 00:27:57.529 05:10:27 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:27:57.529 05:10:27 -- common/autotest_common.sh@116 -- # : 1 00:27:57.529 05:10:27 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:27:57.529 05:10:27 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:27:57.530 05:10:27 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:27:57.530 05:10:27 -- common/autotest_common.sh@120 -- # : 0 00:27:57.530 05:10:27 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:27:57.530 05:10:27 -- common/autotest_common.sh@122 -- # : 0 00:27:57.530 05:10:27 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:27:57.530 05:10:27 -- common/autotest_common.sh@124 -- # : 0 00:27:57.530 05:10:27 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:27:57.530 05:10:27 -- common/autotest_common.sh@126 -- # : 0 00:27:57.530 05:10:27 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:27:57.530 05:10:27 -- common/autotest_common.sh@128 -- # : 0 00:27:57.530 05:10:27 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:27:57.530 05:10:27 -- common/autotest_common.sh@130 -- # : 0 00:27:57.530 05:10:27 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:27:57.530 05:10:27 -- common/autotest_common.sh@132 -- # : v23.11 00:27:57.530 05:10:27 -- common/autotest_common.sh@133 -- # export 
SPDK_TEST_NATIVE_DPDK 00:27:57.530 05:10:27 -- common/autotest_common.sh@134 -- # : true 00:27:57.530 05:10:27 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:27:57.530 05:10:27 -- common/autotest_common.sh@136 -- # : 1 00:27:57.530 05:10:27 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:27:57.530 05:10:27 -- common/autotest_common.sh@138 -- # : 0 00:27:57.530 05:10:27 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:27:57.530 05:10:27 -- common/autotest_common.sh@140 -- # : 0 00:27:57.530 05:10:27 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:27:57.530 05:10:27 -- common/autotest_common.sh@142 -- # : 0 00:27:57.530 05:10:27 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:27:57.530 05:10:27 -- common/autotest_common.sh@144 -- # : 0 00:27:57.530 05:10:27 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:27:57.530 05:10:27 -- common/autotest_common.sh@146 -- # : 0 00:27:57.530 05:10:27 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:27:57.530 05:10:27 -- common/autotest_common.sh@148 -- # : 00:27:57.530 05:10:27 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:27:57.530 05:10:27 -- common/autotest_common.sh@150 -- # : 0 00:27:57.530 05:10:27 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:27:57.530 05:10:27 -- common/autotest_common.sh@152 -- # : 0 00:27:57.530 05:10:27 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:27:57.530 05:10:27 -- common/autotest_common.sh@154 -- # : 0 00:27:57.530 05:10:27 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:27:57.530 05:10:27 -- common/autotest_common.sh@156 -- # : 0 00:27:57.530 05:10:27 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:27:57.530 05:10:27 -- common/autotest_common.sh@158 -- # : 0 00:27:57.530 05:10:27 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:27:57.530 05:10:27 -- common/autotest_common.sh@160 -- # : 0 00:27:57.530 05:10:27 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:27:57.530 05:10:27 -- common/autotest_common.sh@163 -- # : 00:27:57.530 05:10:27 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:27:57.530 05:10:27 -- common/autotest_common.sh@165 -- # : 0 00:27:57.530 05:10:27 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:27:57.530 05:10:27 -- common/autotest_common.sh@167 -- # : 0 00:27:57.530 05:10:27 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:27:57.530 05:10:27 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:27:57.530 05:10:27 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:27:57.530 05:10:27 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:27:57.530 05:10:27 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:27:57.530 05:10:27 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:57.530 05:10:27 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:57.530 05:10:27 -- common/autotest_common.sh@174 -- # export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:57.530 05:10:27 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:57.530 05:10:27 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:27:57.530 05:10:27 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:27:57.530 05:10:27 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:27:57.530 05:10:27 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:27:57.530 05:10:27 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:27:57.530 05:10:27 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:27:57.530 05:10:27 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:57.530 05:10:27 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:57.530 05:10:27 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:57.530 05:10:27 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:57.530 05:10:27 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:27:57.530 05:10:27 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:27:57.530 05:10:27 -- common/autotest_common.sh@196 -- # cat 00:27:57.530 05:10:27 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:27:57.530 05:10:27 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:57.530 05:10:27 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:57.530 05:10:27 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:57.530 05:10:27 -- common/autotest_common.sh@226 -- # 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:57.530 05:10:27 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:27:57.530 05:10:27 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:27:57.530 05:10:27 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:27:57.530 05:10:27 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:27:57.530 05:10:27 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:27:57.530 05:10:27 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:27:57.530 05:10:27 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:27:57.530 05:10:27 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:27:57.530 05:10:27 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:27:57.530 05:10:27 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:27:57.530 05:10:27 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:27:57.530 05:10:27 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:27:57.530 05:10:27 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:57.530 05:10:27 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:57.530 05:10:27 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:27:57.530 05:10:27 -- common/autotest_common.sh@249 -- # export valgrind= 00:27:57.530 05:10:27 -- common/autotest_common.sh@249 -- # valgrind= 00:27:57.530 05:10:27 -- common/autotest_common.sh@255 -- # uname -s 00:27:57.530 05:10:27 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:27:57.530 05:10:27 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:27:57.530 05:10:27 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:27:57.530 05:10:27 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:27:57.530 05:10:27 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:27:57.530 05:10:27 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:27:57.530 05:10:27 -- common/autotest_common.sh@265 -- # MAKE=make 00:27:57.530 05:10:27 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:27:57.530 05:10:27 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:27:57.530 05:10:27 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:27:57.530 05:10:27 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:27:57.530 05:10:27 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:27:57.530 05:10:27 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:27:57.530 05:10:27 -- common/autotest_common.sh@309 -- # [[ -z 145403 ]] 00:27:57.530 05:10:27 -- common/autotest_common.sh@309 -- # kill -0 145403 00:27:57.530 05:10:27 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:27:57.530 05:10:27 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:27:57.530 05:10:27 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:27:57.530 05:10:27 -- common/autotest_common.sh@322 -- # local mount target_dir 00:27:57.530 05:10:27 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:27:57.530 05:10:27 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:27:57.530 05:10:27 -- 
common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:27:57.530 05:10:27 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:27:57.530 05:10:27 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.dAjbva 00:27:57.530 05:10:27 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:27:57.530 05:10:27 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:27:57.530 05:10:27 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:27:57.530 05:10:27 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.dAjbva/tests/interrupt /tmp/spdk.dAjbva 00:27:57.530 05:10:27 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:27:57.530 05:10:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:57.531 05:10:27 -- common/autotest_common.sh@318 -- # df -T 00:27:57.531 05:10:27 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:27:57.531 05:10:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:57.531 05:10:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:57.531 05:10:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=1248964608 00:27:57.531 05:10:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253683200 00:27:57.531 05:10:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=4718592 00:27:57.531 05:10:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:57.531 05:10:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:27:57.531 05:10:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:27:57.531 05:10:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=9150492672 00:27:57.531 05:10:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:27:57.531 05:10:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=11449524224 00:27:57.531 05:10:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:57.531 05:10:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:57.531 05:10:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:57.531 05:10:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267146240 00:27:57.531 05:10:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6268403712 00:27:57.531 05:10:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:27:57.531 05:10:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:57.531 05:10:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:57.531 05:10:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:57.531 05:10:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:27:57.531 05:10:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:27:57.531 05:10:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:27:57.531 05:10:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:57.531 05:10:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:27:57.531 05:10:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:27:57.531 05:10:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=103061504 00:27:57.531 05:10:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109395968 00:27:57.531 05:10:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 
00:27:57.531 05:10:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:57.531 05:10:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:57.531 05:10:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:57.531 05:10:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253675008 00:27:57.531 05:10:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253679104 00:27:57.531 05:10:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:27:57.531 05:10:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:57.531 05:10:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:27:57.531 05:10:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:27:57.531 05:10:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=94966317056 00:27:57.531 05:10:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:27:57.531 05:10:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=4736462848 00:27:57.531 05:10:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:57.531 05:10:27 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:27:57.531 * Looking for test storage... 00:27:57.531 05:10:27 -- common/autotest_common.sh@359 -- # local target_space new_size 00:27:57.531 05:10:27 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:27:57.531 05:10:27 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:57.531 05:10:27 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:27:57.531 05:10:27 -- common/autotest_common.sh@363 -- # mount=/ 00:27:57.531 05:10:27 -- common/autotest_common.sh@365 -- # target_space=9150492672 00:27:57.531 05:10:27 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:27:57.531 05:10:27 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:27:57.531 05:10:27 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:27:57.531 05:10:27 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:27:57.531 05:10:27 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:27:57.531 05:10:27 -- common/autotest_common.sh@372 -- # new_size=13664116736 00:27:57.531 05:10:27 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:27:57.531 05:10:27 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:27:57.531 05:10:27 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:27:57.531 05:10:27 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:57.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:57.531 05:10:27 -- common/autotest_common.sh@380 -- # return 0 00:27:57.531 05:10:27 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:27:57.531 05:10:27 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:27:57.531 05:10:27 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:27:57.531 05:10:27 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:27:57.531 05:10:27 -- common/autotest_common.sh@1672 -- # true 
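[Editor's note] The trace above shows the test-storage probe: the script asks df for the filesystem backing the test directory, checks the available space against the requested 2 GiB (requested_size=2147483648), and prepares a /tmp/spdk.XXXXXX fallback if needed. The following is a minimal standalone sketch of that idea only; it is not the autotest_common.sh implementation, and the helper name pick_test_storage is made up for illustration.

```bash
#!/usr/bin/env bash
# Illustrative sketch only -- not the set_test_storage helper traced above.
# The 2 GiB figure and the /tmp fallback pattern are taken from the trace.
pick_test_storage() {
    local testdir=$1 requested_size=$2
    local avail mount
    # Ask df (in 1-byte blocks) which mount backs $testdir and how much is free.
    read -r _ _ _ avail _ mount < <(df -B1 --output=source,size,used,avail,pcent,target "$testdir" | tail -n1)
    if (( avail >= requested_size )); then
        echo "$testdir"              # enough room where the test already lives
    else
        mktemp -d -t spdk.XXXXXX     # otherwise fall back to a scratch dir under /tmp
    fi
}

# Example: require 2 GiB for the interrupt tests.
pick_test_storage /home/vagrant/spdk_repo/spdk/test/interrupt $((2 * 1024 * 1024 * 1024))
```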
00:27:57.531 05:10:27 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:27:57.531 05:10:27 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:27:57.531 05:10:27 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:27:57.531 05:10:27 -- common/autotest_common.sh@27 -- # exec 00:27:57.531 05:10:27 -- common/autotest_common.sh@29 -- # exec 00:27:57.531 05:10:27 -- common/autotest_common.sh@31 -- # xtrace_restore 00:27:57.531 05:10:27 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:27:57.531 05:10:27 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:27:57.531 05:10:27 -- common/autotest_common.sh@18 -- # set -x 00:27:57.531 05:10:27 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:57.531 05:10:27 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:27:57.531 05:10:27 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:27:57.531 05:10:27 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:27:57.531 05:10:27 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:27:57.531 05:10:27 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:27:57.531 05:10:27 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:27:57.531 05:10:27 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:27:57.531 05:10:27 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:27:57.531 05:10:27 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.531 05:10:27 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:27:57.531 05:10:27 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=145443 00:27:57.531 05:10:27 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:27:57.531 05:10:27 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:57.531 05:10:27 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 145443 /var/tmp/spdk.sock 00:27:57.531 05:10:27 -- common/autotest_common.sh@819 -- # '[' -z 145443 ']' 00:27:57.531 05:10:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.531 05:10:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:57.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.531 05:10:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.531 05:10:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:57.531 05:10:27 -- common/autotest_common.sh@10 -- # set +x 00:27:57.531 [2024-04-27 05:10:27.437944] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
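[Editor's note] The start_intr_tgt step traced above launches the interrupt_tgt example with a 0x07 core mask and then waits for its RPC socket at /var/tmp/spdk.sock. A rough sketch of that pattern follows; the polling loop is a simplification, not the waitforlisten helper from autotest_common.sh, and the flags are copied verbatim from the trace.

```bash
#!/usr/bin/env bash
# Sketch of the start-and-wait pattern seen above (flags from the trace: -m 0x07 -r ... -E -g).
rpc_sock=/var/tmp/spdk.sock

/home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r "$rpc_sock" -E -g &
intr_tgt_pid=$!
trap 'kill "$intr_tgt_pid"; rm -f "$rpc_sock"' EXIT

# Poll until the UNIX-domain RPC socket answers (up to ~10 s), then the test body runs.
for _ in $(seq 1 100); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null && break
    sleep 0.1
done
```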
00:27:57.531 [2024-04-27 05:10:27.438170] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145443 ] 00:27:57.790 [2024-04-27 05:10:27.610273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:58.049 [2024-04-27 05:10:27.722909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.049 [2024-04-27 05:10:27.723073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.049 [2024-04-27 05:10:27.723066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:58.049 [2024-04-27 05:10:27.839126] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:58.617 05:10:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:58.617 05:10:28 -- common/autotest_common.sh@852 -- # return 0 00:27:58.617 05:10:28 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:27:58.617 05:10:28 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:27:58.617 05:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:58.617 05:10:28 -- common/autotest_common.sh@10 -- # set +x 00:27:58.617 05:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:58.617 05:10:28 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:27:58.617 "name": "app_thread", 00:27:58.617 "id": 1, 00:27:58.617 "active_pollers": [], 00:27:58.617 "timed_pollers": [ 00:27:58.617 { 00:27:58.617 "name": "rpc_subsystem_poll", 00:27:58.617 "id": 1, 00:27:58.617 "state": "waiting", 00:27:58.617 "run_count": 0, 00:27:58.617 "busy_count": 0, 00:27:58.617 "period_ticks": 8800000 00:27:58.617 } 00:27:58.617 ], 00:27:58.617 "paused_pollers": [] 00:27:58.617 }' 00:27:58.617 05:10:28 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:27:58.617 05:10:28 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:27:58.617 05:10:28 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:27:58.617 05:10:28 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:27:58.876 05:10:28 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:27:58.876 05:10:28 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:27:58.876 05:10:28 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:27:58.876 05:10:28 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:27:58.876 05:10:28 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:27:58.876 5000+0 records in 00:27:58.876 5000+0 records out 00:27:58.876 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0222763 s, 460 MB/s 00:27:58.876 05:10:28 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:27:59.135 AIO0 00:27:59.135 05:10:28 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:59.393 05:10:29 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:27:59.394 05:10:29 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:27:59.652 05:10:29 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r 
'.threads[0]' 00:27:59.652 05:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:59.653 05:10:29 -- common/autotest_common.sh@10 -- # set +x 00:27:59.653 05:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:59.653 05:10:29 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:27:59.653 "name": "app_thread", 00:27:59.653 "id": 1, 00:27:59.653 "active_pollers": [], 00:27:59.653 "timed_pollers": [ 00:27:59.653 { 00:27:59.653 "name": "rpc_subsystem_poll", 00:27:59.653 "id": 1, 00:27:59.653 "state": "waiting", 00:27:59.653 "run_count": 0, 00:27:59.653 "busy_count": 0, 00:27:59.653 "period_ticks": 8800000 00:27:59.653 } 00:27:59.653 ], 00:27:59.653 "paused_pollers": [] 00:27:59.653 }' 00:27:59.653 05:10:29 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:27:59.653 05:10:29 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:27:59.653 05:10:29 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:27:59.653 05:10:29 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:27:59.653 05:10:29 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:27:59.653 05:10:29 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:27:59.653 05:10:29 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:27:59.653 05:10:29 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 145443 00:27:59.653 05:10:29 -- common/autotest_common.sh@926 -- # '[' -z 145443 ']' 00:27:59.653 05:10:29 -- common/autotest_common.sh@930 -- # kill -0 145443 00:27:59.653 05:10:29 -- common/autotest_common.sh@931 -- # uname 00:27:59.653 05:10:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:59.653 05:10:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145443 00:27:59.653 05:10:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:59.653 05:10:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:59.653 killing process with pid 145443 00:27:59.653 05:10:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145443' 00:27:59.653 05:10:29 -- common/autotest_common.sh@945 -- # kill 145443 00:27:59.653 05:10:29 -- common/autotest_common.sh@950 -- # wait 145443 00:28:00.220 05:10:29 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:28:00.220 05:10:29 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:28:00.220 ************************************ 00:28:00.220 END TEST reap_unregistered_poller 00:28:00.220 ************************************ 00:28:00.220 00:28:00.220 real 0m2.683s 00:28:00.220 user 0m1.856s 00:28:00.220 sys 0m0.570s 00:28:00.220 05:10:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.220 05:10:29 -- common/autotest_common.sh@10 -- # set +x 00:28:00.220 05:10:29 -- spdk/autotest.sh@204 -- # uname -s 00:28:00.220 05:10:29 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:28:00.220 05:10:29 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:28:00.220 05:10:29 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:28:00.220 05:10:29 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:28:00.220 05:10:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:00.220 05:10:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:00.220 05:10:29 -- 
common/autotest_common.sh@10 -- # set +x 00:28:00.220 ************************************ 00:28:00.220 START TEST spdk_dd 00:28:00.220 ************************************ 00:28:00.220 05:10:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:28:00.220 * Looking for test storage... 00:28:00.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:00.220 05:10:30 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:00.220 05:10:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.220 05:10:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.220 05:10:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.221 05:10:30 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:00.221 05:10:30 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:00.221 05:10:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:00.221 05:10:30 -- paths/export.sh@5 -- # export PATH 00:28:00.221 05:10:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:00.221 05:10:30 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:00.479 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:00.479 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:02.392 05:10:32 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:28:02.393 05:10:32 -- dd/dd.sh@11 -- # nvme_in_userspace 00:28:02.393 05:10:32 -- scripts/common.sh@311 -- # local bdf bdfs 00:28:02.393 05:10:32 -- scripts/common.sh@312 -- # local nvmes 00:28:02.393 05:10:32 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:28:02.393 05:10:32 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:28:02.393 05:10:32 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:28:02.393 05:10:32 -- scripts/common.sh@297 -- # local bdf= 00:28:02.393 05:10:32 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:28:02.393 05:10:32 -- scripts/common.sh@232 -- # local class 00:28:02.393 
05:10:32 -- scripts/common.sh@233 -- # local subclass 00:28:02.393 05:10:32 -- scripts/common.sh@234 -- # local progif 00:28:02.393 05:10:32 -- scripts/common.sh@235 -- # printf %02x 1 00:28:02.393 05:10:32 -- scripts/common.sh@235 -- # class=01 00:28:02.393 05:10:32 -- scripts/common.sh@236 -- # printf %02x 8 00:28:02.393 05:10:32 -- scripts/common.sh@236 -- # subclass=08 00:28:02.393 05:10:32 -- scripts/common.sh@237 -- # printf %02x 2 00:28:02.393 05:10:32 -- scripts/common.sh@237 -- # progif=02 00:28:02.393 05:10:32 -- scripts/common.sh@239 -- # hash lspci 00:28:02.393 05:10:32 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:28:02.393 05:10:32 -- scripts/common.sh@242 -- # grep -i -- -p02 00:28:02.393 05:10:32 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:28:02.393 05:10:32 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:28:02.393 05:10:32 -- scripts/common.sh@244 -- # tr -d '"' 00:28:02.393 05:10:32 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:02.393 05:10:32 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:28:02.393 05:10:32 -- scripts/common.sh@15 -- # local i 00:28:02.393 05:10:32 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:28:02.393 05:10:32 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:28:02.393 05:10:32 -- scripts/common.sh@24 -- # return 0 00:28:02.393 05:10:32 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:28:02.393 05:10:32 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:28:02.393 05:10:32 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:28:02.393 05:10:32 -- scripts/common.sh@322 -- # uname -s 00:28:02.393 05:10:32 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:28:02.393 05:10:32 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:28:02.393 05:10:32 -- scripts/common.sh@327 -- # (( 1 )) 00:28:02.393 05:10:32 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:28:02.393 05:10:32 -- dd/dd.sh@13 -- # check_liburing 00:28:02.393 05:10:32 -- dd/common.sh@139 -- # local lib so 00:28:02.393 05:10:32 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:28:02.393 05:10:32 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:28:02.393 05:10:32 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:02.393 05:10:32 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:28:02.393 05:10:32 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:28:02.393 05:10:32 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:28:02.393 05:10:32 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:28:02.393 05:10:32 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:28:02.393 05:10:32 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:28:02.393 05:10:32 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]] 00:28:02.393 05:10:32 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]] 00:28:02.393 05:10:32 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:28:02.393 05:10:32 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:28:02.393 05:10:32 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:28:02.393 05:10:32 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:28:02.393 05:10:32 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:28:02.393 05:10:32 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:28:02.393 05:10:32 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:28:02.393 05:10:32 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:28:02.393 05:10:32 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:28:02.393 05:10:32 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:28:02.393 05:10:32 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:28:02.393 05:10:32 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:02.393 05:10:32 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:28:02.393 05:10:32 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:28:02.393 05:10:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:02.393 05:10:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:02.393 05:10:32 -- common/autotest_common.sh@10 -- # set +x 00:28:02.393 ************************************ 00:28:02.393 START TEST spdk_dd_basic_rw 00:28:02.393 ************************************ 00:28:02.393 05:10:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:28:02.393 * Looking for test storage... 
00:28:02.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:02.393 05:10:32 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:02.393 05:10:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.393 05:10:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.393 05:10:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.393 05:10:32 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:02.393 05:10:32 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:02.393 05:10:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:02.393 05:10:32 -- paths/export.sh@5 -- # export PATH 00:28:02.393 05:10:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:02.393 05:10:32 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:28:02.393 05:10:32 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:28:02.393 05:10:32 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:28:02.393 05:10:32 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:28:02.393 05:10:32 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:28:02.393 05:10:32 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:28:02.393 05:10:32 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:28:02.393 05:10:32 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:02.393 05:10:32 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:02.393 05:10:32 -- 
dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:28:02.393 05:10:32 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:28:02.393 05:10:32 -- dd/common.sh@126 -- # mapfile -t id 00:28:02.393 05:10:32 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:28:02.678 05:10:32 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: 
nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 101 Data Units Written: 7 Host Read Commands: 2161 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable 
Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:28:02.678 05:10:32 -- dd/common.sh@130 -- # lbaf=04 00:28:02.679 05:10:32 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not 
Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset 
Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 101 Data Units Written: 7 Host Read Commands: 2161 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:28:02.679 05:10:32 -- dd/common.sh@132 -- # lbaf=4096 00:28:02.679 05:10:32 -- dd/common.sh@134 -- # echo 4096 00:28:02.679 05:10:32 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:28:02.679 05:10:32 -- dd/basic_rw.sh@96 -- # : 00:28:02.679 05:10:32 -- dd/basic_rw.sh@96 -- # gen_conf 00:28:02.679 05:10:32 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:28:02.679 05:10:32 -- dd/common.sh@31 -- # xtrace_disable 00:28:02.679 05:10:32 -- common/autotest_common.sh@10 -- # set +x 00:28:02.679 05:10:32 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:28:02.679 05:10:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:02.679 05:10:32 -- common/autotest_common.sh@10 -- # set +x 00:28:02.679 ************************************ 
00:28:02.679 START TEST dd_bs_lt_native_bs 00:28:02.679 ************************************ 00:28:02.679 05:10:32 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:28:02.679 05:10:32 -- common/autotest_common.sh@640 -- # local es=0 00:28:02.679 05:10:32 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:28:02.679 05:10:32 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:02.679 05:10:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:02.679 05:10:32 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:02.679 05:10:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:02.679 05:10:32 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:02.679 05:10:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:02.679 05:10:32 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:02.679 05:10:32 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:02.679 05:10:32 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:28:02.679 { 00:28:02.679 "subsystems": [ 00:28:02.679 { 00:28:02.679 "subsystem": "bdev", 00:28:02.679 "config": [ 00:28:02.679 { 00:28:02.679 "params": { 00:28:02.679 "trtype": "pcie", 00:28:02.679 "traddr": "0000:00:06.0", 00:28:02.679 "name": "Nvme0" 00:28:02.679 }, 00:28:02.679 "method": "bdev_nvme_attach_controller" 00:28:02.679 }, 00:28:02.679 { 00:28:02.679 "method": "bdev_wait_for_examine" 00:28:02.679 } 00:28:02.679 ] 00:28:02.679 } 00:28:02.679 ] 00:28:02.679 } 00:28:02.679 [2024-04-27 05:10:32.516429] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:02.679 [2024-04-27 05:10:32.516701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145754 ] 00:28:02.938 [2024-04-27 05:10:32.690884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.938 [2024-04-27 05:10:32.809046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.196 [2024-04-27 05:10:33.007831] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:28:03.196 [2024-04-27 05:10:33.008013] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:03.454 [2024-04-27 05:10:33.244443] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:03.713 05:10:33 -- common/autotest_common.sh@643 -- # es=234 00:28:03.713 05:10:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:03.713 05:10:33 -- common/autotest_common.sh@652 -- # es=106 00:28:03.713 05:10:33 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:03.713 05:10:33 -- common/autotest_common.sh@660 -- # es=1 00:28:03.713 05:10:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:03.713 00:28:03.713 real 0m0.989s 00:28:03.713 user 0m0.654s 00:28:03.713 sys 0m0.295s 00:28:03.713 05:10:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:03.713 05:10:33 -- common/autotest_common.sh@10 -- # set +x 00:28:03.713 ************************************ 00:28:03.713 END TEST dd_bs_lt_native_bs 00:28:03.713 ************************************ 00:28:03.713 05:10:33 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:28:03.713 05:10:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:03.713 05:10:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:03.713 05:10:33 -- common/autotest_common.sh@10 -- # set +x 00:28:03.713 ************************************ 00:28:03.713 START TEST dd_rw 00:28:03.713 ************************************ 00:28:03.713 05:10:33 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:28:03.713 05:10:33 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:28:03.713 05:10:33 -- dd/basic_rw.sh@12 -- # local count size 00:28:03.713 05:10:33 -- dd/basic_rw.sh@13 -- # local qds bss 00:28:03.713 05:10:33 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:28:03.713 05:10:33 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:28:03.713 05:10:33 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:28:03.713 05:10:33 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:28:03.713 05:10:33 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:28:03.713 05:10:33 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:28:03.713 05:10:33 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:28:03.713 05:10:33 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:28:03.713 05:10:33 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:28:03.713 05:10:33 -- dd/basic_rw.sh@23 -- # count=15 00:28:03.713 05:10:33 -- dd/basic_rw.sh@24 -- # count=15 00:28:03.713 05:10:33 -- dd/basic_rw.sh@25 -- # size=61440 00:28:03.713 05:10:33 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:28:03.713 05:10:33 -- dd/common.sh@98 -- # xtrace_disable 00:28:03.713 05:10:33 -- common/autotest_common.sh@10 -- # set +x 00:28:04.279 05:10:34 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
00:28:04.279 05:10:34 -- dd/basic_rw.sh@30 -- # gen_conf 00:28:04.279 05:10:34 -- dd/common.sh@31 -- # xtrace_disable 00:28:04.279 05:10:34 -- common/autotest_common.sh@10 -- # set +x 00:28:04.279 { 00:28:04.279 "subsystems": [ 00:28:04.279 { 00:28:04.279 "subsystem": "bdev", 00:28:04.279 "config": [ 00:28:04.279 { 00:28:04.279 "params": { 00:28:04.279 "trtype": "pcie", 00:28:04.279 "traddr": "0000:00:06.0", 00:28:04.279 "name": "Nvme0" 00:28:04.279 }, 00:28:04.279 "method": "bdev_nvme_attach_controller" 00:28:04.279 }, 00:28:04.279 { 00:28:04.279 "method": "bdev_wait_for_examine" 00:28:04.279 } 00:28:04.279 ] 00:28:04.279 } 00:28:04.279 ] 00:28:04.279 } 00:28:04.279 [2024-04-27 05:10:34.139350] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:04.279 [2024-04-27 05:10:34.139620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145803 ] 00:28:04.538 [2024-04-27 05:10:34.309544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.538 [2024-04-27 05:10:34.427315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.385  Copying: 60/60 [kB] (average 19 MBps) 00:28:05.385 00:28:05.385 05:10:35 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:28:05.385 05:10:35 -- dd/basic_rw.sh@37 -- # gen_conf 00:28:05.385 05:10:35 -- dd/common.sh@31 -- # xtrace_disable 00:28:05.385 05:10:35 -- common/autotest_common.sh@10 -- # set +x 00:28:05.385 { 00:28:05.385 "subsystems": [ 00:28:05.385 { 00:28:05.385 "subsystem": "bdev", 00:28:05.385 "config": [ 00:28:05.385 { 00:28:05.385 "params": { 00:28:05.385 "trtype": "pcie", 00:28:05.385 "traddr": "0000:00:06.0", 00:28:05.385 "name": "Nvme0" 00:28:05.385 }, 00:28:05.385 "method": "bdev_nvme_attach_controller" 00:28:05.385 }, 00:28:05.385 { 00:28:05.385 "method": "bdev_wait_for_examine" 00:28:05.385 } 00:28:05.385 ] 00:28:05.385 } 00:28:05.385 ] 00:28:05.385 } 00:28:05.385 [2024-04-27 05:10:35.132784] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:05.385 [2024-04-27 05:10:35.133065] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145822 ] 00:28:05.385 [2024-04-27 05:10:35.303523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.644 [2024-04-27 05:10:35.424205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.470  Copying: 60/60 [kB] (average 19 MBps) 00:28:06.470 00:28:06.470 05:10:36 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:06.470 05:10:36 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:28:06.470 05:10:36 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:06.470 05:10:36 -- dd/common.sh@11 -- # local nvme_ref= 00:28:06.470 05:10:36 -- dd/common.sh@12 -- # local size=61440 00:28:06.470 05:10:36 -- dd/common.sh@14 -- # local bs=1048576 00:28:06.470 05:10:36 -- dd/common.sh@15 -- # local count=1 00:28:06.470 05:10:36 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:28:06.470 05:10:36 -- dd/common.sh@18 -- # gen_conf 00:28:06.470 05:10:36 -- dd/common.sh@31 -- # xtrace_disable 00:28:06.470 05:10:36 -- common/autotest_common.sh@10 -- # set +x 00:28:06.470 [2024-04-27 05:10:36.199286] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:06.470 { 00:28:06.470 "subsystems": [ 00:28:06.470 { 00:28:06.470 "subsystem": "bdev", 00:28:06.470 "config": [ 00:28:06.470 { 00:28:06.470 "params": { 00:28:06.470 "trtype": "pcie", 00:28:06.470 "traddr": "0000:00:06.0", 00:28:06.470 "name": "Nvme0" 00:28:06.470 }, 00:28:06.470 "method": "bdev_nvme_attach_controller" 00:28:06.470 }, 00:28:06.470 { 00:28:06.470 "method": "bdev_wait_for_examine" 00:28:06.470 } 00:28:06.470 ] 00:28:06.470 } 00:28:06.470 ] 00:28:06.470 } 00:28:06.470 [2024-04-27 05:10:36.200114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145840 ] 00:28:06.470 [2024-04-27 05:10:36.373033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.728 [2024-04-27 05:10:36.502621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.243  Copying: 1024/1024 [kB] (average 500 MBps) 00:28:07.243 00:28:07.243 05:10:37 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:28:07.243 05:10:37 -- dd/basic_rw.sh@23 -- # count=15 00:28:07.243 05:10:37 -- dd/basic_rw.sh@24 -- # count=15 00:28:07.243 05:10:37 -- dd/basic_rw.sh@25 -- # size=61440 00:28:07.243 05:10:37 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:28:07.243 05:10:37 -- dd/common.sh@98 -- # xtrace_disable 00:28:07.243 05:10:37 -- common/autotest_common.sh@10 -- # set +x 00:28:08.178 05:10:37 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:28:08.178 05:10:37 -- dd/basic_rw.sh@30 -- # gen_conf 00:28:08.178 05:10:37 -- dd/common.sh@31 -- # xtrace_disable 00:28:08.178 05:10:37 -- common/autotest_common.sh@10 -- # set +x 00:28:08.178 { 00:28:08.178 "subsystems": [ 00:28:08.178 { 00:28:08.178 "subsystem": "bdev", 00:28:08.178 "config": [ 
00:28:08.178 { 00:28:08.178 "params": { 00:28:08.178 "trtype": "pcie", 00:28:08.178 "traddr": "0000:00:06.0", 00:28:08.178 "name": "Nvme0" 00:28:08.178 }, 00:28:08.178 "method": "bdev_nvme_attach_controller" 00:28:08.178 }, 00:28:08.178 { 00:28:08.178 "method": "bdev_wait_for_examine" 00:28:08.178 } 00:28:08.178 ] 00:28:08.178 } 00:28:08.178 ] 00:28:08.178 } 00:28:08.178 [2024-04-27 05:10:37.849291] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:08.178 [2024-04-27 05:10:37.849583] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145873 ] 00:28:08.178 [2024-04-27 05:10:38.020496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.436 [2024-04-27 05:10:38.147671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.003  Copying: 60/60 [kB] (average 58 MBps) 00:28:09.003 00:28:09.003 05:10:38 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:28:09.003 05:10:38 -- dd/basic_rw.sh@37 -- # gen_conf 00:28:09.003 05:10:38 -- dd/common.sh@31 -- # xtrace_disable 00:28:09.003 05:10:38 -- common/autotest_common.sh@10 -- # set +x 00:28:09.003 [2024-04-27 05:10:38.831414] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:09.003 [2024-04-27 05:10:38.831793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145892 ] 00:28:09.003 { 00:28:09.003 "subsystems": [ 00:28:09.003 { 00:28:09.003 "subsystem": "bdev", 00:28:09.003 "config": [ 00:28:09.003 { 00:28:09.003 "params": { 00:28:09.003 "trtype": "pcie", 00:28:09.003 "traddr": "0000:00:06.0", 00:28:09.003 "name": "Nvme0" 00:28:09.003 }, 00:28:09.003 "method": "bdev_nvme_attach_controller" 00:28:09.003 }, 00:28:09.003 { 00:28:09.003 "method": "bdev_wait_for_examine" 00:28:09.003 } 00:28:09.003 ] 00:28:09.003 } 00:28:09.003 ] 00:28:09.003 } 00:28:09.262 [2024-04-27 05:10:39.007593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.262 [2024-04-27 05:10:39.133797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.124  Copying: 60/60 [kB] (average 58 MBps) 00:28:10.124 00:28:10.124 05:10:39 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:10.124 05:10:39 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:28:10.124 05:10:39 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:10.124 05:10:39 -- dd/common.sh@11 -- # local nvme_ref= 00:28:10.124 05:10:39 -- dd/common.sh@12 -- # local size=61440 00:28:10.124 05:10:39 -- dd/common.sh@14 -- # local bs=1048576 00:28:10.124 05:10:39 -- dd/common.sh@15 -- # local count=1 00:28:10.124 05:10:39 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:28:10.124 05:10:39 -- dd/common.sh@18 -- # gen_conf 00:28:10.124 05:10:39 -- dd/common.sh@31 -- # xtrace_disable 00:28:10.124 05:10:39 -- common/autotest_common.sh@10 -- # set +x 00:28:10.124 [2024-04-27 05:10:39.816186] Starting SPDK v24.01.1-pre git 
sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:10.124 [2024-04-27 05:10:39.816481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145909 ] 00:28:10.124 { 00:28:10.124 "subsystems": [ 00:28:10.124 { 00:28:10.124 "subsystem": "bdev", 00:28:10.124 "config": [ 00:28:10.124 { 00:28:10.124 "params": { 00:28:10.124 "trtype": "pcie", 00:28:10.124 "traddr": "0000:00:06.0", 00:28:10.124 "name": "Nvme0" 00:28:10.124 }, 00:28:10.124 "method": "bdev_nvme_attach_controller" 00:28:10.124 }, 00:28:10.124 { 00:28:10.124 "method": "bdev_wait_for_examine" 00:28:10.124 } 00:28:10.124 ] 00:28:10.124 } 00:28:10.124 ] 00:28:10.124 } 00:28:10.124 [2024-04-27 05:10:39.983214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.382 [2024-04-27 05:10:40.107949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.899  Copying: 1024/1024 [kB] (average 500 MBps) 00:28:10.899 00:28:10.899 05:10:40 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:28:10.899 05:10:40 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:28:10.899 05:10:40 -- dd/basic_rw.sh@23 -- # count=7 00:28:10.899 05:10:40 -- dd/basic_rw.sh@24 -- # count=7 00:28:10.899 05:10:40 -- dd/basic_rw.sh@25 -- # size=57344 00:28:10.899 05:10:40 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:28:10.899 05:10:40 -- dd/common.sh@98 -- # xtrace_disable 00:28:10.899 05:10:40 -- common/autotest_common.sh@10 -- # set +x 00:28:11.465 05:10:41 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:28:11.465 05:10:41 -- dd/basic_rw.sh@30 -- # gen_conf 00:28:11.465 05:10:41 -- dd/common.sh@31 -- # xtrace_disable 00:28:11.465 05:10:41 -- common/autotest_common.sh@10 -- # set +x 00:28:11.465 [2024-04-27 05:10:41.362219] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:11.465 [2024-04-27 05:10:41.362478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145933 ] 00:28:11.465 { 00:28:11.465 "subsystems": [ 00:28:11.465 { 00:28:11.465 "subsystem": "bdev", 00:28:11.465 "config": [ 00:28:11.465 { 00:28:11.465 "params": { 00:28:11.465 "trtype": "pcie", 00:28:11.465 "traddr": "0000:00:06.0", 00:28:11.465 "name": "Nvme0" 00:28:11.465 }, 00:28:11.465 "method": "bdev_nvme_attach_controller" 00:28:11.465 }, 00:28:11.465 { 00:28:11.465 "method": "bdev_wait_for_examine" 00:28:11.465 } 00:28:11.465 ] 00:28:11.465 } 00:28:11.465 ] 00:28:11.465 } 00:28:11.723 [2024-04-27 05:10:41.525671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.980 [2024-04-27 05:10:41.653482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.547  Copying: 56/56 [kB] (average 27 MBps) 00:28:12.547 00:28:12.547 05:10:42 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:28:12.547 05:10:42 -- dd/basic_rw.sh@37 -- # gen_conf 00:28:12.547 05:10:42 -- dd/common.sh@31 -- # xtrace_disable 00:28:12.547 05:10:42 -- common/autotest_common.sh@10 -- # set +x 00:28:12.547 { 00:28:12.547 "subsystems": [ 00:28:12.547 { 00:28:12.547 "subsystem": "bdev", 00:28:12.547 "config": [ 00:28:12.547 { 00:28:12.547 "params": { 00:28:12.547 "trtype": "pcie", 00:28:12.547 "traddr": "0000:00:06.0", 00:28:12.547 "name": "Nvme0" 00:28:12.547 }, 00:28:12.547 "method": "bdev_nvme_attach_controller" 00:28:12.547 }, 00:28:12.547 { 00:28:12.547 "method": "bdev_wait_for_examine" 00:28:12.547 } 00:28:12.547 ] 00:28:12.547 } 00:28:12.547 ] 00:28:12.547 } 00:28:12.547 [2024-04-27 05:10:42.383593] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:12.547 [2024-04-27 05:10:42.383859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145949 ] 00:28:12.806 [2024-04-27 05:10:42.553013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.806 [2024-04-27 05:10:42.673216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.630  Copying: 56/56 [kB] (average 27 MBps) 00:28:13.630 00:28:13.630 05:10:43 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:13.630 05:10:43 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:28:13.630 05:10:43 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:13.630 05:10:43 -- dd/common.sh@11 -- # local nvme_ref= 00:28:13.630 05:10:43 -- dd/common.sh@12 -- # local size=57344 00:28:13.630 05:10:43 -- dd/common.sh@14 -- # local bs=1048576 00:28:13.630 05:10:43 -- dd/common.sh@15 -- # local count=1 00:28:13.630 05:10:43 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:28:13.630 05:10:43 -- dd/common.sh@18 -- # gen_conf 00:28:13.630 05:10:43 -- dd/common.sh@31 -- # xtrace_disable 00:28:13.630 05:10:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.630 { 00:28:13.630 "subsystems": [ 00:28:13.630 { 00:28:13.630 "subsystem": "bdev", 00:28:13.630 "config": [ 00:28:13.630 { 00:28:13.630 "params": { 00:28:13.630 "trtype": "pcie", 00:28:13.630 "traddr": "0000:00:06.0", 00:28:13.630 "name": "Nvme0" 00:28:13.630 }, 00:28:13.630 "method": "bdev_nvme_attach_controller" 00:28:13.630 }, 00:28:13.630 { 00:28:13.630 "method": "bdev_wait_for_examine" 00:28:13.630 } 00:28:13.630 ] 00:28:13.630 } 00:28:13.630 ] 00:28:13.630 } 00:28:13.630 [2024-04-27 05:10:43.413235] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:13.630 [2024-04-27 05:10:43.413509] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145970 ] 00:28:13.889 [2024-04-27 05:10:43.584858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.889 [2024-04-27 05:10:43.699166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.714  Copying: 1024/1024 [kB] (average 1000 MBps) 00:28:14.714 00:28:14.714 05:10:44 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:28:14.714 05:10:44 -- dd/basic_rw.sh@23 -- # count=7 00:28:14.714 05:10:44 -- dd/basic_rw.sh@24 -- # count=7 00:28:14.714 05:10:44 -- dd/basic_rw.sh@25 -- # size=57344 00:28:14.714 05:10:44 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:28:14.714 05:10:44 -- dd/common.sh@98 -- # xtrace_disable 00:28:14.714 05:10:44 -- common/autotest_common.sh@10 -- # set +x 00:28:14.971 05:10:44 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:28:14.971 05:10:44 -- dd/basic_rw.sh@30 -- # gen_conf 00:28:14.971 05:10:44 -- dd/common.sh@31 -- # xtrace_disable 00:28:14.971 05:10:44 -- common/autotest_common.sh@10 -- # set +x 00:28:15.230 { 00:28:15.230 "subsystems": [ 00:28:15.230 { 00:28:15.230 "subsystem": "bdev", 00:28:15.230 "config": [ 00:28:15.230 { 00:28:15.230 "params": { 00:28:15.230 "trtype": "pcie", 00:28:15.230 "traddr": "0000:00:06.0", 00:28:15.230 "name": "Nvme0" 00:28:15.230 }, 00:28:15.230 "method": "bdev_nvme_attach_controller" 00:28:15.230 }, 00:28:15.230 { 00:28:15.230 "method": "bdev_wait_for_examine" 00:28:15.230 } 00:28:15.230 ] 00:28:15.230 } 00:28:15.230 ] 00:28:15.230 } 00:28:15.230 [2024-04-27 05:10:44.934237] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:15.230 [2024-04-27 05:10:44.934553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145997 ] 00:28:15.230 [2024-04-27 05:10:45.107491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.490 [2024-04-27 05:10:45.231942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.007  Copying: 56/56 [kB] (average 54 MBps) 00:28:16.007 00:28:16.007 05:10:45 -- dd/basic_rw.sh@37 -- # gen_conf 00:28:16.007 05:10:45 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:28:16.007 05:10:45 -- dd/common.sh@31 -- # xtrace_disable 00:28:16.007 05:10:45 -- common/autotest_common.sh@10 -- # set +x 00:28:16.007 { 00:28:16.007 "subsystems": [ 00:28:16.007 { 00:28:16.007 "subsystem": "bdev", 00:28:16.007 "config": [ 00:28:16.007 { 00:28:16.007 "params": { 00:28:16.007 "trtype": "pcie", 00:28:16.007 "traddr": "0000:00:06.0", 00:28:16.007 "name": "Nvme0" 00:28:16.007 }, 00:28:16.007 "method": "bdev_nvme_attach_controller" 00:28:16.007 }, 00:28:16.007 { 00:28:16.007 "method": "bdev_wait_for_examine" 00:28:16.007 } 00:28:16.007 ] 00:28:16.007 } 00:28:16.007 ] 00:28:16.007 } 00:28:16.007 [2024-04-27 05:10:45.886579] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:16.007 [2024-04-27 05:10:45.886863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146016 ] 00:28:16.266 [2024-04-27 05:10:46.055111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.266 [2024-04-27 05:10:46.177154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.093  Copying: 56/56 [kB] (average 54 MBps) 00:28:17.093 00:28:17.093 05:10:46 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:17.093 05:10:46 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:28:17.093 05:10:46 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:17.093 05:10:46 -- dd/common.sh@11 -- # local nvme_ref= 00:28:17.093 05:10:46 -- dd/common.sh@12 -- # local size=57344 00:28:17.093 05:10:46 -- dd/common.sh@14 -- # local bs=1048576 00:28:17.093 05:10:46 -- dd/common.sh@15 -- # local count=1 00:28:17.093 05:10:46 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:28:17.093 05:10:46 -- dd/common.sh@18 -- # gen_conf 00:28:17.093 05:10:46 -- dd/common.sh@31 -- # xtrace_disable 00:28:17.093 05:10:46 -- common/autotest_common.sh@10 -- # set +x 00:28:17.093 { 00:28:17.093 "subsystems": [ 00:28:17.093 { 00:28:17.093 "subsystem": "bdev", 00:28:17.093 "config": [ 00:28:17.093 { 00:28:17.093 "params": { 00:28:17.093 "trtype": "pcie", 00:28:17.093 "traddr": "0000:00:06.0", 00:28:17.093 "name": "Nvme0" 00:28:17.093 }, 00:28:17.093 "method": "bdev_nvme_attach_controller" 00:28:17.093 }, 00:28:17.093 { 00:28:17.093 "method": "bdev_wait_for_examine" 00:28:17.093 } 00:28:17.093 ] 00:28:17.093 } 00:28:17.093 ] 00:28:17.093 } 00:28:17.093 [2024-04-27 05:10:46.831633] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:17.093 [2024-04-27 05:10:46.831932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146031 ] 00:28:17.093 [2024-04-27 05:10:47.001119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.352 [2024-04-27 05:10:47.127000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.869  Copying: 1024/1024 [kB] (average 500 MBps) 00:28:17.869 00:28:17.869 05:10:47 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:28:17.869 05:10:47 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:28:17.869 05:10:47 -- dd/basic_rw.sh@23 -- # count=3 00:28:17.869 05:10:47 -- dd/basic_rw.sh@24 -- # count=3 00:28:17.869 05:10:47 -- dd/basic_rw.sh@25 -- # size=49152 00:28:17.869 05:10:47 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:28:17.869 05:10:47 -- dd/common.sh@98 -- # xtrace_disable 00:28:17.869 05:10:47 -- common/autotest_common.sh@10 -- # set +x 00:28:18.436 05:10:48 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:28:18.436 05:10:48 -- dd/basic_rw.sh@30 -- # gen_conf 00:28:18.436 05:10:48 -- dd/common.sh@31 -- # xtrace_disable 00:28:18.436 05:10:48 -- common/autotest_common.sh@10 -- # set +x 00:28:18.436 { 00:28:18.436 "subsystems": [ 00:28:18.436 { 00:28:18.436 "subsystem": "bdev", 00:28:18.436 "config": [ 00:28:18.436 { 00:28:18.436 "params": { 00:28:18.436 "trtype": "pcie", 00:28:18.436 "traddr": "0000:00:06.0", 00:28:18.436 "name": "Nvme0" 00:28:18.436 }, 00:28:18.436 "method": "bdev_nvme_attach_controller" 00:28:18.436 }, 00:28:18.436 { 00:28:18.436 "method": "bdev_wait_for_examine" 00:28:18.436 } 00:28:18.436 ] 00:28:18.436 } 00:28:18.436 ] 00:28:18.436 } 00:28:18.436 [2024-04-27 05:10:48.313696] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:18.436 [2024-04-27 05:10:48.314618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146061 ] 00:28:18.694 [2024-04-27 05:10:48.485300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.694 [2024-04-27 05:10:48.603015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.519  Copying: 48/48 [kB] (average 46 MBps) 00:28:19.519 00:28:19.519 05:10:49 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:28:19.519 05:10:49 -- dd/basic_rw.sh@37 -- # gen_conf 00:28:19.519 05:10:49 -- dd/common.sh@31 -- # xtrace_disable 00:28:19.519 05:10:49 -- common/autotest_common.sh@10 -- # set +x 00:28:19.519 [2024-04-27 05:10:49.233842] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:19.519 [2024-04-27 05:10:49.234400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146081 ] 00:28:19.519 { 00:28:19.519 "subsystems": [ 00:28:19.519 { 00:28:19.519 "subsystem": "bdev", 00:28:19.519 "config": [ 00:28:19.519 { 00:28:19.519 "params": { 00:28:19.519 "trtype": "pcie", 00:28:19.519 "traddr": "0000:00:06.0", 00:28:19.519 "name": "Nvme0" 00:28:19.519 }, 00:28:19.519 "method": "bdev_nvme_attach_controller" 00:28:19.519 }, 00:28:19.519 { 00:28:19.519 "method": "bdev_wait_for_examine" 00:28:19.519 } 00:28:19.519 ] 00:28:19.519 } 00:28:19.519 ] 00:28:19.519 } 00:28:19.519 [2024-04-27 05:10:49.391975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.778 [2024-04-27 05:10:49.508947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.295  Copying: 48/48 [kB] (average 46 MBps) 00:28:20.295 00:28:20.295 05:10:50 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:20.295 05:10:50 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:28:20.295 05:10:50 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:20.295 05:10:50 -- dd/common.sh@11 -- # local nvme_ref= 00:28:20.295 05:10:50 -- dd/common.sh@12 -- # local size=49152 00:28:20.295 05:10:50 -- dd/common.sh@14 -- # local bs=1048576 00:28:20.295 05:10:50 -- dd/common.sh@15 -- # local count=1 00:28:20.295 05:10:50 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:28:20.295 05:10:50 -- dd/common.sh@18 -- # gen_conf 00:28:20.295 05:10:50 -- dd/common.sh@31 -- # xtrace_disable 00:28:20.295 05:10:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.295 [2024-04-27 05:10:50.161533] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:20.295 [2024-04-27 05:10:50.161787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146097 ] 00:28:20.295 { 00:28:20.295 "subsystems": [ 00:28:20.295 { 00:28:20.295 "subsystem": "bdev", 00:28:20.295 "config": [ 00:28:20.295 { 00:28:20.295 "params": { 00:28:20.295 "trtype": "pcie", 00:28:20.295 "traddr": "0000:00:06.0", 00:28:20.295 "name": "Nvme0" 00:28:20.295 }, 00:28:20.295 "method": "bdev_nvme_attach_controller" 00:28:20.295 }, 00:28:20.295 { 00:28:20.295 "method": "bdev_wait_for_examine" 00:28:20.295 } 00:28:20.295 ] 00:28:20.295 } 00:28:20.295 ] 00:28:20.295 } 00:28:20.553 [2024-04-27 05:10:50.330228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.553 [2024-04-27 05:10:50.447548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.379  Copying: 1024/1024 [kB] (average 1000 MBps) 00:28:21.379 00:28:21.379 05:10:51 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:28:21.379 05:10:51 -- dd/basic_rw.sh@23 -- # count=3 00:28:21.379 05:10:51 -- dd/basic_rw.sh@24 -- # count=3 00:28:21.379 05:10:51 -- dd/basic_rw.sh@25 -- # size=49152 00:28:21.379 05:10:51 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:28:21.379 05:10:51 -- dd/common.sh@98 -- # xtrace_disable 00:28:21.379 05:10:51 -- common/autotest_common.sh@10 -- # set +x 00:28:21.948 05:10:51 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:28:21.948 05:10:51 -- dd/basic_rw.sh@30 -- # gen_conf 00:28:21.948 05:10:51 -- dd/common.sh@31 -- # xtrace_disable 00:28:21.948 05:10:51 -- common/autotest_common.sh@10 -- # set +x 00:28:21.948 { 00:28:21.948 "subsystems": [ 00:28:21.948 { 00:28:21.948 "subsystem": "bdev", 00:28:21.948 "config": [ 00:28:21.948 { 00:28:21.948 "params": { 00:28:21.948 "trtype": "pcie", 00:28:21.948 "traddr": "0000:00:06.0", 00:28:21.948 "name": "Nvme0" 00:28:21.948 }, 00:28:21.948 "method": "bdev_nvme_attach_controller" 00:28:21.948 }, 00:28:21.948 { 00:28:21.948 "method": "bdev_wait_for_examine" 00:28:21.948 } 00:28:21.948 ] 00:28:21.948 } 00:28:21.948 ] 00:28:21.948 } 00:28:21.948 [2024-04-27 05:10:51.621480] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:21.948 [2024-04-27 05:10:51.621752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146121 ] 00:28:21.948 [2024-04-27 05:10:51.793034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.207 [2024-04-27 05:10:51.901656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.775  Copying: 48/48 [kB] (average 46 MBps) 00:28:22.775 00:28:22.775 05:10:52 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:28:22.775 05:10:52 -- dd/basic_rw.sh@37 -- # gen_conf 00:28:22.775 05:10:52 -- dd/common.sh@31 -- # xtrace_disable 00:28:22.775 05:10:52 -- common/autotest_common.sh@10 -- # set +x 00:28:22.775 { 00:28:22.775 "subsystems": [ 00:28:22.775 { 00:28:22.775 "subsystem": "bdev", 00:28:22.775 "config": [ 00:28:22.775 { 00:28:22.775 "params": { 00:28:22.775 "trtype": "pcie", 00:28:22.775 "traddr": "0000:00:06.0", 00:28:22.775 "name": "Nvme0" 00:28:22.775 }, 00:28:22.775 "method": "bdev_nvme_attach_controller" 00:28:22.775 }, 00:28:22.775 { 00:28:22.775 "method": "bdev_wait_for_examine" 00:28:22.775 } 00:28:22.775 ] 00:28:22.775 } 00:28:22.775 ] 00:28:22.775 } 00:28:22.775 [2024-04-27 05:10:52.568725] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:22.775 [2024-04-27 05:10:52.569575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146137 ] 00:28:23.035 [2024-04-27 05:10:52.740617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.035 [2024-04-27 05:10:52.849651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.551  Copying: 48/48 [kB] (average 46 MBps) 00:28:23.551 00:28:23.551 05:10:53 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:23.551 05:10:53 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:28:23.551 05:10:53 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:23.551 05:10:53 -- dd/common.sh@11 -- # local nvme_ref= 00:28:23.551 05:10:53 -- dd/common.sh@12 -- # local size=49152 00:28:23.551 05:10:53 -- dd/common.sh@14 -- # local bs=1048576 00:28:23.551 05:10:53 -- dd/common.sh@15 -- # local count=1 00:28:23.551 05:10:53 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:28:23.551 05:10:53 -- dd/common.sh@18 -- # gen_conf 00:28:23.551 05:10:53 -- dd/common.sh@31 -- # xtrace_disable 00:28:23.551 05:10:53 -- common/autotest_common.sh@10 -- # set +x 00:28:23.810 [2024-04-27 05:10:53.477784] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:23.810 [2024-04-27 05:10:53.478034] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146158 ] 00:28:23.810 { 00:28:23.810 "subsystems": [ 00:28:23.810 { 00:28:23.810 "subsystem": "bdev", 00:28:23.810 "config": [ 00:28:23.810 { 00:28:23.810 "params": { 00:28:23.810 "trtype": "pcie", 00:28:23.810 "traddr": "0000:00:06.0", 00:28:23.810 "name": "Nvme0" 00:28:23.810 }, 00:28:23.810 "method": "bdev_nvme_attach_controller" 00:28:23.810 }, 00:28:23.810 { 00:28:23.810 "method": "bdev_wait_for_examine" 00:28:23.810 } 00:28:23.810 ] 00:28:23.810 } 00:28:23.810 ] 00:28:23.810 } 00:28:23.810 [2024-04-27 05:10:53.646175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.069 [2024-04-27 05:10:53.769636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.636  Copying: 1024/1024 [kB] (average 500 MBps) 00:28:24.636 00:28:24.636 00:28:24.636 real 0m20.903s 00:28:24.636 user 0m14.199s 00:28:24.636 sys 0m5.339s 00:28:24.636 05:10:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:24.636 05:10:54 -- common/autotest_common.sh@10 -- # set +x 00:28:24.636 ************************************ 00:28:24.636 END TEST dd_rw 00:28:24.636 ************************************ 00:28:24.636 05:10:54 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:28:24.636 05:10:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:24.636 05:10:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:24.636 05:10:54 -- common/autotest_common.sh@10 -- # set +x 00:28:24.636 ************************************ 00:28:24.636 START TEST dd_rw_offset 00:28:24.636 ************************************ 00:28:24.636 05:10:54 -- common/autotest_common.sh@1104 -- # basic_offset 00:28:24.636 05:10:54 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:28:24.636 05:10:54 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:28:24.636 05:10:54 -- dd/common.sh@98 -- # xtrace_disable 00:28:24.636 05:10:54 -- common/autotest_common.sh@10 -- # set +x 00:28:24.636 05:10:54 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:28:24.636 05:10:54 -- dd/basic_rw.sh@56 -- # 
data=16ictfwxptxdrbmvd0ks5q5vug19x2hmrdzor9nzc8kvjg5lp71l277xbxhry82ko4i65vcoiora9r9iqwzuhe8dv0xx5bzwtqw4xuzujbnkyuwrxedic397gd6uw0vwpr75xugwc177ihw83erzk1upogruryfgrebd2q9ikz63ohih2e0z4taa2stxehxhduehtamhak2qiei90k9vdj4pf01f0fuoe2044owy6hyhmruycu34pt5sz7gmxnq1qz6lmn861dc6jxlzn1dqwmi2st73qrzhsyyd3y361sxjgivcrl345sybofekvoij5f4f5oc4qf9nazqg34ju7cnuwgkp9ci6tky99uevco76w7edbhkutx6vkaa7wwwd7280z1oqba0pahb0irnw9udsdpdshy06gy3ag37yfq230vvrvh66vwepxxyzw7gmnx80vx2j310b2yam73cofr1fg35pmozg5uekuqfp0cwjld62vwlh0skyso1muyqj9booh8fg7lie0iks8ymi11adhapzi3a39j8mor70q7ixrbar8jj4p32o6uxw5b849u711yjv8vhdkuosvrakd2m12vfrrtbmvawoy58m09zewxrtpf6hd93eaud0nklp557v1kl5qeofr4usui666nl2zwxgb263cwyxzkcg6cu83bje7yvh5g23ws1wh26e4kctlcao86mth51crivbx658yxlxmot0qf7pu7i0zh72fi24ltm6hba0vqkepi4gmipcj61j0eproschzflfj0ejcv0ujdbo8ba76kewwi70bwiz61u78j4y5izlb2fas9h9dmhe7v5327ysxhju11w0ysr45jigyleetlqp9routj7qzfxqjv73m3ic1jf5t5ddu4lpytgtw6dbje5ao5s75zskjupo9r641qjkrklcohb698h6t3iwy08jx62zn747j655i2aybxy4xr8bq5vruhfgqc49qc1a2ox1wgav9gag172u73cjjvg5dqccrfjkjrjzxcttmixg8erqmg9rw9fg266i3d31jcdxk7ety0rxi6ne1prir2b01tfb7149q7c78mykuus87yzmdr5zj134imtf2cgy7tv7kni05e888dk0xasuynj40s8t8asq1jhygqke2qcbs84b827qye9xi9ehtdllecnrh6a7jclofa7hqalbq7z2w95kuz5w00voebc4g22u3kvi9dvjxwsbziql6es2pdgpjijj2rn0vfno9i4j1ie6v9hw6onjwbs5uzly96oqp4ionulx7fxxnimd75tinpha6x2j29ik8oozr1cpakoyuoydjcu7cdfol7adw5mgmcpw25ifoxszmihvry2s8bf6y3yzb29fgdga2rguq1zku0e5g8kej2phc2wtmdtm29o6yn12yhyk7abdp8h3itj8g54xvja0hhjyscfygrs4fbfjo447yqpz5jlv96n9isex84a4ncn7x4n2bh8enoirudld74c52ufarmjel9034to8hkoqrb2ossz49ccuazzk67kwehi59uof74qygfx48y1qjmilhavjy3rblm52528mnovz7hhldoftie7sai06cu00ggtqvjpkrlwz2m820ld29mcxh1042jnpirtd6vjeo6whhsc5igi7j8ficdlpj3wcl89kfqfy0d8e11oybi1ai65nd104e77zkkizl3n507q9vuc57uclb5gw5j4dlfg9u7xcdv0kbapqkc6n6zma0snnyjo08gg0f6d9apv2my1c7fqpvca6fiuo144rkqmh1apo2x0dsk2axk44woyun033x5snthfdml1s6erufmugk5fsn3nqyt785jz4qk893h3kglquozex5ftfbr3m2a74cp4orgrkqlft79emfe7pj7qjz146iutavrfv3pyn3wp7sn36xn322amcmacv8ugynql8b6n98ygm8o2azsspkkfl475tx999ks4j2hfjakzm35o0snynqpg82cmnp2i2f3ja3yrjmf8z5c7lmlkntu99losyb3rnd82pjbabmr5eoynp30or2npq1sia90rkcged33ggx9259b6zp0zfgnto3kh65yqjr7qp4us0dwtrwcj9ylud9cs77cr7llyhsc5hodjq89umsira7e1vgit9og5liakpvup8qz6cgol1jbwmjkhe1ztb5j2zizfax0pgjvimuwm4bvgq9yot5odfmilccgmsj1ja7lzsjex76ff289pgo6s55fov5afigtm8kvtfc9t3j748370iw6os9pevo2yzafbzu4gy80y2m2galtzdyno5a8vaw3ab2jt3tq99hkhg308s9dxy5r7dfgx7ykxi3qfebz42nut6dtiyjubuj70qnpeff6hwnmgtodlixhdespeadedh190ku34fq57awueg0wlk1kq5jn5fdgowl5eds12hzqz7z8kcicd6j83i1nw8pv98ho7nvmk5mnkgi0bqntoxyx1mfprypq35p0m8ksbzsnjwfk4qj0ktzr08u4y9d8hc3mi3920lp3p9utw1sfd0qla1yy2it2j2a25nl4x3kodcv3l3a3vu9af3i14m2gtg0leply9fxef0q6d1xfl9k6lnbom8km4w8quyd2gaduylyxv64hhbxfb3mgeroe2hdojw7i7i15vqwqkqy11o0x5jdwtqzx3yxebidvx16c9nsk4f8pmpiv8a0mwvjfxz6n8j87jm3gsz8lrp9fsq51zqtsl2vc96byjunus6hfd0wvkwuxy8jg3sel4xrrwrkrnagvxnka3ybigeqdeoug2sy0b23mcnnepkohv3kgz1yhyl9fh4axt6ha25edotb01ft38ax6kvrz7fqshmvpyz5m7xqiwg82xexd8u5005vt1s8giqzpknwg1ohqucwk1vmea1t6pqr8zxrhudikjlklg3fz1ebnbhq5c4kxwh7l665xx70fp904pm1ttkn5hd1ws5cjr754q61yyw52k2hmk36kmtalr0eo88vmnmsbwb0kte6ceh6sf33sp7a8qy3uye0xnmscxa4woyysh3k664riuibkq5kdm8g7lq7eqybbtitbvo0l5yrnot6w387an0shr1wc0rrlhv5iuld69r3iu4f1jblgdgkusn1g1hxts7mxg6sf6gx7g3aq0gtb99o2aduup9s6xkf20ea6wnmfu9izwlhgcgee6hooozb4hrncb4vqvqlpknf3cc5eyqjop2ilta9oi2l1lfp9trxfydfcyefae8a51x7nfcgh53d7zpbqihvdselutml719j1oouh5shn3ho7z8y2fqw765rrt7u63c5zzlqj4whadfocehfzcwhekbau7dek7lynm8vzjwplp8zim1dyeiqn8c5rmqjehsm2cjlthaqlucnn2g99jo5c95e2vbbpw54d9tjsrbdounh4c1v0445mxtdng3o0qdn05xubko5vetqqjctfntr5nkx1hs08tv38bj8h0793s0fnew22
6wafj3y81bf53irv28z3rkfe6bei4o4farbz0p2zbxxkje3ajgp8nparsf2ms9raovpo6e416qrn2ssxmd6c3ud5w534w6b63w6ewsaf3ku04xs96xlsm73oodm7xb61wrmhs2f2cqoykf2hio80e1mq38173znkyht0i5ezac3jq1nbs4bbjgxp2y5zx9r4pckmgl6oq14bem2bkyu0z9v4t8jfcgc8eensjzrqvauz0oecm6e2z5k2ep0o3ac1n7qmd87hu9wyx7jr7tbv12e5mtre4lmul3kaijbvwxz5cnij792zhwh31254ef5f0xdo9p9elk5v33wqkmjr5ijke7deujg4hd10mbma987mwhfsqcx7sn55v8x5h1xyvfkndgt0g1qmlcc8z1xpp73m66ihdj3md9uyl0e7098e2juz4xm71csqm4fr4seyhon06ajhh4u1764zhr5ied5ex2u5vynlec6k76392plwifawt8hcm0ry38mifa1rx7ql0kmrwhkxby38hi4spny5wfjugjw4q9 00:28:24.636 05:10:54 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:28:24.636 05:10:54 -- dd/basic_rw.sh@59 -- # gen_conf 00:28:24.636 05:10:54 -- dd/common.sh@31 -- # xtrace_disable 00:28:24.636 05:10:54 -- common/autotest_common.sh@10 -- # set +x 00:28:24.895 [2024-04-27 05:10:54.564053] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:24.895 [2024-04-27 05:10:54.564307] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146198 ] 00:28:24.895 { 00:28:24.895 "subsystems": [ 00:28:24.895 { 00:28:24.895 "subsystem": "bdev", 00:28:24.895 "config": [ 00:28:24.895 { 00:28:24.895 "params": { 00:28:24.895 "trtype": "pcie", 00:28:24.895 "traddr": "0000:00:06.0", 00:28:24.895 "name": "Nvme0" 00:28:24.895 }, 00:28:24.895 "method": "bdev_nvme_attach_controller" 00:28:24.895 }, 00:28:24.895 { 00:28:24.895 "method": "bdev_wait_for_examine" 00:28:24.895 } 00:28:24.895 ] 00:28:24.895 } 00:28:24.895 ] 00:28:24.895 } 00:28:24.895 [2024-04-27 05:10:54.734956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.153 [2024-04-27 05:10:54.851804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.720  Copying: 4096/4096 [B] (average 4000 kBps) 00:28:25.720 00:28:25.720 05:10:55 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:28:25.720 05:10:55 -- dd/basic_rw.sh@65 -- # gen_conf 00:28:25.720 05:10:55 -- dd/common.sh@31 -- # xtrace_disable 00:28:25.720 05:10:55 -- common/autotest_common.sh@10 -- # set +x 00:28:25.720 [2024-04-27 05:10:55.535950] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:25.720 [2024-04-27 05:10:55.536255] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146221 ] 00:28:25.720 { 00:28:25.720 "subsystems": [ 00:28:25.720 { 00:28:25.720 "subsystem": "bdev", 00:28:25.720 "config": [ 00:28:25.720 { 00:28:25.720 "params": { 00:28:25.720 "trtype": "pcie", 00:28:25.720 "traddr": "0000:00:06.0", 00:28:25.720 "name": "Nvme0" 00:28:25.720 }, 00:28:25.720 "method": "bdev_nvme_attach_controller" 00:28:25.720 }, 00:28:25.720 { 00:28:25.720 "method": "bdev_wait_for_examine" 00:28:25.720 } 00:28:25.720 ] 00:28:25.720 } 00:28:25.720 ] 00:28:25.720 } 00:28:25.979 [2024-04-27 05:10:55.710737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.979 [2024-04-27 05:10:55.827115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.805  Copying: 4096/4096 [B] (average 4000 kBps) 00:28:26.805 00:28:26.805 05:10:56 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:28:26.806 05:10:56 -- dd/basic_rw.sh@72 -- # [[ 16ictfwxptxdrbmvd0ks5q5vug19x2hmrdzor9nzc8kvjg5lp71l277xbxhry82ko4i65vcoiora9r9iqwzuhe8dv0xx5bzwtqw4xuzujbnkyuwrxedic397gd6uw0vwpr75xugwc177ihw83erzk1upogruryfgrebd2q9ikz63ohih2e0z4taa2stxehxhduehtamhak2qiei90k9vdj4pf01f0fuoe2044owy6hyhmruycu34pt5sz7gmxnq1qz6lmn861dc6jxlzn1dqwmi2st73qrzhsyyd3y361sxjgivcrl345sybofekvoij5f4f5oc4qf9nazqg34ju7cnuwgkp9ci6tky99uevco76w7edbhkutx6vkaa7wwwd7280z1oqba0pahb0irnw9udsdpdshy06gy3ag37yfq230vvrvh66vwepxxyzw7gmnx80vx2j310b2yam73cofr1fg35pmozg5uekuqfp0cwjld62vwlh0skyso1muyqj9booh8fg7lie0iks8ymi11adhapzi3a39j8mor70q7ixrbar8jj4p32o6uxw5b849u711yjv8vhdkuosvrakd2m12vfrrtbmvawoy58m09zewxrtpf6hd93eaud0nklp557v1kl5qeofr4usui666nl2zwxgb263cwyxzkcg6cu83bje7yvh5g23ws1wh26e4kctlcao86mth51crivbx658yxlxmot0qf7pu7i0zh72fi24ltm6hba0vqkepi4gmipcj61j0eproschzflfj0ejcv0ujdbo8ba76kewwi70bwiz61u78j4y5izlb2fas9h9dmhe7v5327ysxhju11w0ysr45jigyleetlqp9routj7qzfxqjv73m3ic1jf5t5ddu4lpytgtw6dbje5ao5s75zskjupo9r641qjkrklcohb698h6t3iwy08jx62zn747j655i2aybxy4xr8bq5vruhfgqc49qc1a2ox1wgav9gag172u73cjjvg5dqccrfjkjrjzxcttmixg8erqmg9rw9fg266i3d31jcdxk7ety0rxi6ne1prir2b01tfb7149q7c78mykuus87yzmdr5zj134imtf2cgy7tv7kni05e888dk0xasuynj40s8t8asq1jhygqke2qcbs84b827qye9xi9ehtdllecnrh6a7jclofa7hqalbq7z2w95kuz5w00voebc4g22u3kvi9dvjxwsbziql6es2pdgpjijj2rn0vfno9i4j1ie6v9hw6onjwbs5uzly96oqp4ionulx7fxxnimd75tinpha6x2j29ik8oozr1cpakoyuoydjcu7cdfol7adw5mgmcpw25ifoxszmihvry2s8bf6y3yzb29fgdga2rguq1zku0e5g8kej2phc2wtmdtm29o6yn12yhyk7abdp8h3itj8g54xvja0hhjyscfygrs4fbfjo447yqpz5jlv96n9isex84a4ncn7x4n2bh8enoirudld74c52ufarmjel9034to8hkoqrb2ossz49ccuazzk67kwehi59uof74qygfx48y1qjmilhavjy3rblm52528mnovz7hhldoftie7sai06cu00ggtqvjpkrlwz2m820ld29mcxh1042jnpirtd6vjeo6whhsc5igi7j8ficdlpj3wcl89kfqfy0d8e11oybi1ai65nd104e77zkkizl3n507q9vuc57uclb5gw5j4dlfg9u7xcdv0kbapqkc6n6zma0snnyjo08gg0f6d9apv2my1c7fqpvca6fiuo144rkqmh1apo2x0dsk2axk44woyun033x5snthfdml1s6erufmugk5fsn3nqyt785jz4qk893h3kglquozex5ftfbr3m2a74cp4orgrkqlft79emfe7pj7qjz146iutavrfv3pyn3wp7sn36xn322amcmacv8ugynql8b6n98ygm8o2azsspkkfl475tx999ks4j2hfjakzm35o0snynqpg82cmnp2i2f3ja3yrjmf8z5c7lmlkntu99losyb3rnd82pjbabmr5eoynp30or2npq1sia90rkcged33ggx9259b6zp0zfgnto3kh65yqjr7qp4us0dwtrwcj9ylud9cs77cr7llyhsc5hodjq89umsira7e1vgit9og5liakpvup8qz6cgol1jbwmjkhe1ztb5j2zizfax0pgjvimuwm4bvgq9yot5odfmilccgmsj1ja7lzsjex76ff289pgo6s55fov5afigtm8kvtfc9t3j748370iw6os9pevo2yzafbzu4gy80y2m2galtzdyno5a8vaw3ab2jt3tq99hkhg308s9dxy5r7dfgx7ykxi3
qfebz42nut6dtiyjubuj70qnpeff6hwnmgtodlixhdespeadedh190ku34fq57awueg0wlk1kq5jn5fdgowl5eds12hzqz7z8kcicd6j83i1nw8pv98ho7nvmk5mnkgi0bqntoxyx1mfprypq35p0m8ksbzsnjwfk4qj0ktzr08u4y9d8hc3mi3920lp3p9utw1sfd0qla1yy2it2j2a25nl4x3kodcv3l3a3vu9af3i14m2gtg0leply9fxef0q6d1xfl9k6lnbom8km4w8quyd2gaduylyxv64hhbxfb3mgeroe2hdojw7i7i15vqwqkqy11o0x5jdwtqzx3yxebidvx16c9nsk4f8pmpiv8a0mwvjfxz6n8j87jm3gsz8lrp9fsq51zqtsl2vc96byjunus6hfd0wvkwuxy8jg3sel4xrrwrkrnagvxnka3ybigeqdeoug2sy0b23mcnnepkohv3kgz1yhyl9fh4axt6ha25edotb01ft38ax6kvrz7fqshmvpyz5m7xqiwg82xexd8u5005vt1s8giqzpknwg1ohqucwk1vmea1t6pqr8zxrhudikjlklg3fz1ebnbhq5c4kxwh7l665xx70fp904pm1ttkn5hd1ws5cjr754q61yyw52k2hmk36kmtalr0eo88vmnmsbwb0kte6ceh6sf33sp7a8qy3uye0xnmscxa4woyysh3k664riuibkq5kdm8g7lq7eqybbtitbvo0l5yrnot6w387an0shr1wc0rrlhv5iuld69r3iu4f1jblgdgkusn1g1hxts7mxg6sf6gx7g3aq0gtb99o2aduup9s6xkf20ea6wnmfu9izwlhgcgee6hooozb4hrncb4vqvqlpknf3cc5eyqjop2ilta9oi2l1lfp9trxfydfcyefae8a51x7nfcgh53d7zpbqihvdselutml719j1oouh5shn3ho7z8y2fqw765rrt7u63c5zzlqj4whadfocehfzcwhekbau7dek7lynm8vzjwplp8zim1dyeiqn8c5rmqjehsm2cjlthaqlucnn2g99jo5c95e2vbbpw54d9tjsrbdounh4c1v0445mxtdng3o0qdn05xubko5vetqqjctfntr5nkx1hs08tv38bj8h0793s0fnew226wafj3y81bf53irv28z3rkfe6bei4o4farbz0p2zbxxkje3ajgp8nparsf2ms9raovpo6e416qrn2ssxmd6c3ud5w534w6b63w6ewsaf3ku04xs96xlsm73oodm7xb61wrmhs2f2cqoykf2hio80e1mq38173znkyht0i5ezac3jq1nbs4bbjgxp2y5zx9r4pckmgl6oq14bem2bkyu0z9v4t8jfcgc8eensjzrqvauz0oecm6e2z5k2ep0o3ac1n7qmd87hu9wyx7jr7tbv12e5mtre4lmul3kaijbvwxz5cnij792zhwh31254ef5f0xdo9p9elk5v33wqkmjr5ijke7deujg4hd10mbma987mwhfsqcx7sn55v8x5h1xyvfkndgt0g1qmlcc8z1xpp73m66ihdj3md9uyl0e7098e2juz4xm71csqm4fr4seyhon06ajhh4u1764zhr5ied5ex2u5vynlec6k76392plwifawt8hcm0ry38mifa1rx7ql0kmrwhkxby38hi4spny5wfjugjw4q9 == \1\6\i\c\t\f\w\x\p\t\x\d\r\b\m\v\d\0\k\s\5\q\5\v\u\g\1\9\x\2\h\m\r\d\z\o\r\9\n\z\c\8\k\v\j\g\5\l\p\7\1\l\2\7\7\x\b\x\h\r\y\8\2\k\o\4\i\6\5\v\c\o\i\o\r\a\9\r\9\i\q\w\z\u\h\e\8\d\v\0\x\x\5\b\z\w\t\q\w\4\x\u\z\u\j\b\n\k\y\u\w\r\x\e\d\i\c\3\9\7\g\d\6\u\w\0\v\w\p\r\7\5\x\u\g\w\c\1\7\7\i\h\w\8\3\e\r\z\k\1\u\p\o\g\r\u\r\y\f\g\r\e\b\d\2\q\9\i\k\z\6\3\o\h\i\h\2\e\0\z\4\t\a\a\2\s\t\x\e\h\x\h\d\u\e\h\t\a\m\h\a\k\2\q\i\e\i\9\0\k\9\v\d\j\4\p\f\0\1\f\0\f\u\o\e\2\0\4\4\o\w\y\6\h\y\h\m\r\u\y\c\u\3\4\p\t\5\s\z\7\g\m\x\n\q\1\q\z\6\l\m\n\8\6\1\d\c\6\j\x\l\z\n\1\d\q\w\m\i\2\s\t\7\3\q\r\z\h\s\y\y\d\3\y\3\6\1\s\x\j\g\i\v\c\r\l\3\4\5\s\y\b\o\f\e\k\v\o\i\j\5\f\4\f\5\o\c\4\q\f\9\n\a\z\q\g\3\4\j\u\7\c\n\u\w\g\k\p\9\c\i\6\t\k\y\9\9\u\e\v\c\o\7\6\w\7\e\d\b\h\k\u\t\x\6\v\k\a\a\7\w\w\w\d\7\2\8\0\z\1\o\q\b\a\0\p\a\h\b\0\i\r\n\w\9\u\d\s\d\p\d\s\h\y\0\6\g\y\3\a\g\3\7\y\f\q\2\3\0\v\v\r\v\h\6\6\v\w\e\p\x\x\y\z\w\7\g\m\n\x\8\0\v\x\2\j\3\1\0\b\2\y\a\m\7\3\c\o\f\r\1\f\g\3\5\p\m\o\z\g\5\u\e\k\u\q\f\p\0\c\w\j\l\d\6\2\v\w\l\h\0\s\k\y\s\o\1\m\u\y\q\j\9\b\o\o\h\8\f\g\7\l\i\e\0\i\k\s\8\y\m\i\1\1\a\d\h\a\p\z\i\3\a\3\9\j\8\m\o\r\7\0\q\7\i\x\r\b\a\r\8\j\j\4\p\3\2\o\6\u\x\w\5\b\8\4\9\u\7\1\1\y\j\v\8\v\h\d\k\u\o\s\v\r\a\k\d\2\m\1\2\v\f\r\r\t\b\m\v\a\w\o\y\5\8\m\0\9\z\e\w\x\r\t\p\f\6\h\d\9\3\e\a\u\d\0\n\k\l\p\5\5\7\v\1\k\l\5\q\e\o\f\r\4\u\s\u\i\6\6\6\n\l\2\z\w\x\g\b\2\6\3\c\w\y\x\z\k\c\g\6\c\u\8\3\b\j\e\7\y\v\h\5\g\2\3\w\s\1\w\h\2\6\e\4\k\c\t\l\c\a\o\8\6\m\t\h\5\1\c\r\i\v\b\x\6\5\8\y\x\l\x\m\o\t\0\q\f\7\p\u\7\i\0\z\h\7\2\f\i\2\4\l\t\m\6\h\b\a\0\v\q\k\e\p\i\4\g\m\i\p\c\j\6\1\j\0\e\p\r\o\s\c\h\z\f\l\f\j\0\e\j\c\v\0\u\j\d\b\o\8\b\a\7\6\k\e\w\w\i\7\0\b\w\i\z\6\1\u\7\8\j\4\y\5\i\z\l\b\2\f\a\s\9\h\9\d\m\h\e\7\v\5\3\2\7\y\s\x\h\j\u\1\1\w\0\y\s\r\4\5\j\i\g\y\l\e\e\t\l\q\p\9\r\o\u\t\j\7\q\z\f\x\q\j\v\7\3\m\3\i\c\1\j\f\5\t\5\d\d\u\4\l\p\y\t\g\t\w\6\d\b\j\e\5\a\o\5\s\7
\5\z\s\k\j\u\p\o\9\r\6\4\1\q\j\k\r\k\l\c\o\h\b\6\9\8\h\6\t\3\i\w\y\0\8\j\x\6\2\z\n\7\4\7\j\6\5\5\i\2\a\y\b\x\y\4\x\r\8\b\q\5\v\r\u\h\f\g\q\c\4\9\q\c\1\a\2\o\x\1\w\g\a\v\9\g\a\g\1\7\2\u\7\3\c\j\j\v\g\5\d\q\c\c\r\f\j\k\j\r\j\z\x\c\t\t\m\i\x\g\8\e\r\q\m\g\9\r\w\9\f\g\2\6\6\i\3\d\3\1\j\c\d\x\k\7\e\t\y\0\r\x\i\6\n\e\1\p\r\i\r\2\b\0\1\t\f\b\7\1\4\9\q\7\c\7\8\m\y\k\u\u\s\8\7\y\z\m\d\r\5\z\j\1\3\4\i\m\t\f\2\c\g\y\7\t\v\7\k\n\i\0\5\e\8\8\8\d\k\0\x\a\s\u\y\n\j\4\0\s\8\t\8\a\s\q\1\j\h\y\g\q\k\e\2\q\c\b\s\8\4\b\8\2\7\q\y\e\9\x\i\9\e\h\t\d\l\l\e\c\n\r\h\6\a\7\j\c\l\o\f\a\7\h\q\a\l\b\q\7\z\2\w\9\5\k\u\z\5\w\0\0\v\o\e\b\c\4\g\2\2\u\3\k\v\i\9\d\v\j\x\w\s\b\z\i\q\l\6\e\s\2\p\d\g\p\j\i\j\j\2\r\n\0\v\f\n\o\9\i\4\j\1\i\e\6\v\9\h\w\6\o\n\j\w\b\s\5\u\z\l\y\9\6\o\q\p\4\i\o\n\u\l\x\7\f\x\x\n\i\m\d\7\5\t\i\n\p\h\a\6\x\2\j\2\9\i\k\8\o\o\z\r\1\c\p\a\k\o\y\u\o\y\d\j\c\u\7\c\d\f\o\l\7\a\d\w\5\m\g\m\c\p\w\2\5\i\f\o\x\s\z\m\i\h\v\r\y\2\s\8\b\f\6\y\3\y\z\b\2\9\f\g\d\g\a\2\r\g\u\q\1\z\k\u\0\e\5\g\8\k\e\j\2\p\h\c\2\w\t\m\d\t\m\2\9\o\6\y\n\1\2\y\h\y\k\7\a\b\d\p\8\h\3\i\t\j\8\g\5\4\x\v\j\a\0\h\h\j\y\s\c\f\y\g\r\s\4\f\b\f\j\o\4\4\7\y\q\p\z\5\j\l\v\9\6\n\9\i\s\e\x\8\4\a\4\n\c\n\7\x\4\n\2\b\h\8\e\n\o\i\r\u\d\l\d\7\4\c\5\2\u\f\a\r\m\j\e\l\9\0\3\4\t\o\8\h\k\o\q\r\b\2\o\s\s\z\4\9\c\c\u\a\z\z\k\6\7\k\w\e\h\i\5\9\u\o\f\7\4\q\y\g\f\x\4\8\y\1\q\j\m\i\l\h\a\v\j\y\3\r\b\l\m\5\2\5\2\8\m\n\o\v\z\7\h\h\l\d\o\f\t\i\e\7\s\a\i\0\6\c\u\0\0\g\g\t\q\v\j\p\k\r\l\w\z\2\m\8\2\0\l\d\2\9\m\c\x\h\1\0\4\2\j\n\p\i\r\t\d\6\v\j\e\o\6\w\h\h\s\c\5\i\g\i\7\j\8\f\i\c\d\l\p\j\3\w\c\l\8\9\k\f\q\f\y\0\d\8\e\1\1\o\y\b\i\1\a\i\6\5\n\d\1\0\4\e\7\7\z\k\k\i\z\l\3\n\5\0\7\q\9\v\u\c\5\7\u\c\l\b\5\g\w\5\j\4\d\l\f\g\9\u\7\x\c\d\v\0\k\b\a\p\q\k\c\6\n\6\z\m\a\0\s\n\n\y\j\o\0\8\g\g\0\f\6\d\9\a\p\v\2\m\y\1\c\7\f\q\p\v\c\a\6\f\i\u\o\1\4\4\r\k\q\m\h\1\a\p\o\2\x\0\d\s\k\2\a\x\k\4\4\w\o\y\u\n\0\3\3\x\5\s\n\t\h\f\d\m\l\1\s\6\e\r\u\f\m\u\g\k\5\f\s\n\3\n\q\y\t\7\8\5\j\z\4\q\k\8\9\3\h\3\k\g\l\q\u\o\z\e\x\5\f\t\f\b\r\3\m\2\a\7\4\c\p\4\o\r\g\r\k\q\l\f\t\7\9\e\m\f\e\7\p\j\7\q\j\z\1\4\6\i\u\t\a\v\r\f\v\3\p\y\n\3\w\p\7\s\n\3\6\x\n\3\2\2\a\m\c\m\a\c\v\8\u\g\y\n\q\l\8\b\6\n\9\8\y\g\m\8\o\2\a\z\s\s\p\k\k\f\l\4\7\5\t\x\9\9\9\k\s\4\j\2\h\f\j\a\k\z\m\3\5\o\0\s\n\y\n\q\p\g\8\2\c\m\n\p\2\i\2\f\3\j\a\3\y\r\j\m\f\8\z\5\c\7\l\m\l\k\n\t\u\9\9\l\o\s\y\b\3\r\n\d\8\2\p\j\b\a\b\m\r\5\e\o\y\n\p\3\0\o\r\2\n\p\q\1\s\i\a\9\0\r\k\c\g\e\d\3\3\g\g\x\9\2\5\9\b\6\z\p\0\z\f\g\n\t\o\3\k\h\6\5\y\q\j\r\7\q\p\4\u\s\0\d\w\t\r\w\c\j\9\y\l\u\d\9\c\s\7\7\c\r\7\l\l\y\h\s\c\5\h\o\d\j\q\8\9\u\m\s\i\r\a\7\e\1\v\g\i\t\9\o\g\5\l\i\a\k\p\v\u\p\8\q\z\6\c\g\o\l\1\j\b\w\m\j\k\h\e\1\z\t\b\5\j\2\z\i\z\f\a\x\0\p\g\j\v\i\m\u\w\m\4\b\v\g\q\9\y\o\t\5\o\d\f\m\i\l\c\c\g\m\s\j\1\j\a\7\l\z\s\j\e\x\7\6\f\f\2\8\9\p\g\o\6\s\5\5\f\o\v\5\a\f\i\g\t\m\8\k\v\t\f\c\9\t\3\j\7\4\8\3\7\0\i\w\6\o\s\9\p\e\v\o\2\y\z\a\f\b\z\u\4\g\y\8\0\y\2\m\2\g\a\l\t\z\d\y\n\o\5\a\8\v\a\w\3\a\b\2\j\t\3\t\q\9\9\h\k\h\g\3\0\8\s\9\d\x\y\5\r\7\d\f\g\x\7\y\k\x\i\3\q\f\e\b\z\4\2\n\u\t\6\d\t\i\y\j\u\b\u\j\7\0\q\n\p\e\f\f\6\h\w\n\m\g\t\o\d\l\i\x\h\d\e\s\p\e\a\d\e\d\h\1\9\0\k\u\3\4\f\q\5\7\a\w\u\e\g\0\w\l\k\1\k\q\5\j\n\5\f\d\g\o\w\l\5\e\d\s\1\2\h\z\q\z\7\z\8\k\c\i\c\d\6\j\8\3\i\1\n\w\8\p\v\9\8\h\o\7\n\v\m\k\5\m\n\k\g\i\0\b\q\n\t\o\x\y\x\1\m\f\p\r\y\p\q\3\5\p\0\m\8\k\s\b\z\s\n\j\w\f\k\4\q\j\0\k\t\z\r\0\8\u\4\y\9\d\8\h\c\3\m\i\3\9\2\0\l\p\3\p\9\u\t\w\1\s\f\d\0\q\l\a\1\y\y\2\i\t\2\j\2\a\2\5\n\l\4\x\3\k\o\d\c\v\3\l\3\a\3\v\u\9\a\f\3\i\1\4\m\2\g\t\g\0\l\e\p\l\y\9\f\x\e\f\0\q\6\d\1\x\f\l\9\k\6\l\n\b\o\m\8\k\m\4\w\8\q\u\y\d\2\g\a\d\u\y\l\y\x\v\6\4\h\h\b\x\f\b\3\m\g\e\r\o\e\2\h\d\o\j\w\7\
i\7\i\1\5\v\q\w\q\k\q\y\1\1\o\0\x\5\j\d\w\t\q\z\x\3\y\x\e\b\i\d\v\x\1\6\c\9\n\s\k\4\f\8\p\m\p\i\v\8\a\0\m\w\v\j\f\x\z\6\n\8\j\8\7\j\m\3\g\s\z\8\l\r\p\9\f\s\q\5\1\z\q\t\s\l\2\v\c\9\6\b\y\j\u\n\u\s\6\h\f\d\0\w\v\k\w\u\x\y\8\j\g\3\s\e\l\4\x\r\r\w\r\k\r\n\a\g\v\x\n\k\a\3\y\b\i\g\e\q\d\e\o\u\g\2\s\y\0\b\2\3\m\c\n\n\e\p\k\o\h\v\3\k\g\z\1\y\h\y\l\9\f\h\4\a\x\t\6\h\a\2\5\e\d\o\t\b\0\1\f\t\3\8\a\x\6\k\v\r\z\7\f\q\s\h\m\v\p\y\z\5\m\7\x\q\i\w\g\8\2\x\e\x\d\8\u\5\0\0\5\v\t\1\s\8\g\i\q\z\p\k\n\w\g\1\o\h\q\u\c\w\k\1\v\m\e\a\1\t\6\p\q\r\8\z\x\r\h\u\d\i\k\j\l\k\l\g\3\f\z\1\e\b\n\b\h\q\5\c\4\k\x\w\h\7\l\6\6\5\x\x\7\0\f\p\9\0\4\p\m\1\t\t\k\n\5\h\d\1\w\s\5\c\j\r\7\5\4\q\6\1\y\y\w\5\2\k\2\h\m\k\3\6\k\m\t\a\l\r\0\e\o\8\8\v\m\n\m\s\b\w\b\0\k\t\e\6\c\e\h\6\s\f\3\3\s\p\7\a\8\q\y\3\u\y\e\0\x\n\m\s\c\x\a\4\w\o\y\y\s\h\3\k\6\6\4\r\i\u\i\b\k\q\5\k\d\m\8\g\7\l\q\7\e\q\y\b\b\t\i\t\b\v\o\0\l\5\y\r\n\o\t\6\w\3\8\7\a\n\0\s\h\r\1\w\c\0\r\r\l\h\v\5\i\u\l\d\6\9\r\3\i\u\4\f\1\j\b\l\g\d\g\k\u\s\n\1\g\1\h\x\t\s\7\m\x\g\6\s\f\6\g\x\7\g\3\a\q\0\g\t\b\9\9\o\2\a\d\u\u\p\9\s\6\x\k\f\2\0\e\a\6\w\n\m\f\u\9\i\z\w\l\h\g\c\g\e\e\6\h\o\o\o\z\b\4\h\r\n\c\b\4\v\q\v\q\l\p\k\n\f\3\c\c\5\e\y\q\j\o\p\2\i\l\t\a\9\o\i\2\l\1\l\f\p\9\t\r\x\f\y\d\f\c\y\e\f\a\e\8\a\5\1\x\7\n\f\c\g\h\5\3\d\7\z\p\b\q\i\h\v\d\s\e\l\u\t\m\l\7\1\9\j\1\o\o\u\h\5\s\h\n\3\h\o\7\z\8\y\2\f\q\w\7\6\5\r\r\t\7\u\6\3\c\5\z\z\l\q\j\4\w\h\a\d\f\o\c\e\h\f\z\c\w\h\e\k\b\a\u\7\d\e\k\7\l\y\n\m\8\v\z\j\w\p\l\p\8\z\i\m\1\d\y\e\i\q\n\8\c\5\r\m\q\j\e\h\s\m\2\c\j\l\t\h\a\q\l\u\c\n\n\2\g\9\9\j\o\5\c\9\5\e\2\v\b\b\p\w\5\4\d\9\t\j\s\r\b\d\o\u\n\h\4\c\1\v\0\4\4\5\m\x\t\d\n\g\3\o\0\q\d\n\0\5\x\u\b\k\o\5\v\e\t\q\q\j\c\t\f\n\t\r\5\n\k\x\1\h\s\0\8\t\v\3\8\b\j\8\h\0\7\9\3\s\0\f\n\e\w\2\2\6\w\a\f\j\3\y\8\1\b\f\5\3\i\r\v\2\8\z\3\r\k\f\e\6\b\e\i\4\o\4\f\a\r\b\z\0\p\2\z\b\x\x\k\j\e\3\a\j\g\p\8\n\p\a\r\s\f\2\m\s\9\r\a\o\v\p\o\6\e\4\1\6\q\r\n\2\s\s\x\m\d\6\c\3\u\d\5\w\5\3\4\w\6\b\6\3\w\6\e\w\s\a\f\3\k\u\0\4\x\s\9\6\x\l\s\m\7\3\o\o\d\m\7\x\b\6\1\w\r\m\h\s\2\f\2\c\q\o\y\k\f\2\h\i\o\8\0\e\1\m\q\3\8\1\7\3\z\n\k\y\h\t\0\i\5\e\z\a\c\3\j\q\1\n\b\s\4\b\b\j\g\x\p\2\y\5\z\x\9\r\4\p\c\k\m\g\l\6\o\q\1\4\b\e\m\2\b\k\y\u\0\z\9\v\4\t\8\j\f\c\g\c\8\e\e\n\s\j\z\r\q\v\a\u\z\0\o\e\c\m\6\e\2\z\5\k\2\e\p\0\o\3\a\c\1\n\7\q\m\d\8\7\h\u\9\w\y\x\7\j\r\7\t\b\v\1\2\e\5\m\t\r\e\4\l\m\u\l\3\k\a\i\j\b\v\w\x\z\5\c\n\i\j\7\9\2\z\h\w\h\3\1\2\5\4\e\f\5\f\0\x\d\o\9\p\9\e\l\k\5\v\3\3\w\q\k\m\j\r\5\i\j\k\e\7\d\e\u\j\g\4\h\d\1\0\m\b\m\a\9\8\7\m\w\h\f\s\q\c\x\7\s\n\5\5\v\8\x\5\h\1\x\y\v\f\k\n\d\g\t\0\g\1\q\m\l\c\c\8\z\1\x\p\p\7\3\m\6\6\i\h\d\j\3\m\d\9\u\y\l\0\e\7\0\9\8\e\2\j\u\z\4\x\m\7\1\c\s\q\m\4\f\r\4\s\e\y\h\o\n\0\6\a\j\h\h\4\u\1\7\6\4\z\h\r\5\i\e\d\5\e\x\2\u\5\v\y\n\l\e\c\6\k\7\6\3\9\2\p\l\w\i\f\a\w\t\8\h\c\m\0\r\y\3\8\m\i\f\a\1\r\x\7\q\l\0\k\m\r\w\h\k\x\b\y\3\8\h\i\4\s\p\n\y\5\w\f\j\u\g\j\w\4\q\9 ]] 00:28:26.806 00:28:26.806 real 0m1.988s 00:28:26.806 user 0m1.269s 00:28:26.806 sys 0m0.550s 00:28:26.806 05:10:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:26.806 05:10:56 -- common/autotest_common.sh@10 -- # set +x 00:28:26.806 ************************************ 00:28:26.806 END TEST dd_rw_offset 00:28:26.806 ************************************ 00:28:26.806 05:10:56 -- dd/basic_rw.sh@1 -- # cleanup 00:28:26.806 05:10:56 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:28:26.806 05:10:56 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:26.806 05:10:56 -- dd/common.sh@11 -- # local nvme_ref= 00:28:26.806 05:10:56 -- dd/common.sh@12 -- # local size=0xffff 00:28:26.806 05:10:56 -- dd/common.sh@14 -- # local bs=1048576 
00:28:26.806 05:10:56 -- dd/common.sh@15 -- # local count=1 00:28:26.806 05:10:56 -- dd/common.sh@18 -- # gen_conf 00:28:26.806 05:10:56 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:28:26.806 05:10:56 -- dd/common.sh@31 -- # xtrace_disable 00:28:26.806 05:10:56 -- common/autotest_common.sh@10 -- # set +x 00:28:26.806 { 00:28:26.806 "subsystems": [ 00:28:26.806 { 00:28:26.806 "subsystem": "bdev", 00:28:26.806 "config": [ 00:28:26.806 { 00:28:26.806 "params": { 00:28:26.806 "trtype": "pcie", 00:28:26.806 "traddr": "0000:00:06.0", 00:28:26.806 "name": "Nvme0" 00:28:26.806 }, 00:28:26.806 "method": "bdev_nvme_attach_controller" 00:28:26.806 }, 00:28:26.806 { 00:28:26.806 "method": "bdev_wait_for_examine" 00:28:26.806 } 00:28:26.806 ] 00:28:26.806 } 00:28:26.806 ] 00:28:26.806 } 00:28:26.806 [2024-04-27 05:10:56.544124] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:26.806 [2024-04-27 05:10:56.544424] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146252 ] 00:28:26.806 [2024-04-27 05:10:56.713826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.065 [2024-04-27 05:10:56.841107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.583  Copying: 1024/1024 [kB] (average 1000 MBps) 00:28:27.583 00:28:27.583 05:10:57 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:27.583 ************************************ 00:28:27.583 END TEST spdk_dd_basic_rw 00:28:27.583 ************************************ 00:28:27.583 00:28:27.583 real 0m25.390s 00:28:27.583 user 0m16.960s 00:28:27.583 sys 0m6.695s 00:28:27.583 05:10:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:27.583 05:10:57 -- common/autotest_common.sh@10 -- # set +x 00:28:27.842 05:10:57 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:28:27.842 05:10:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:27.842 05:10:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:27.842 05:10:57 -- common/autotest_common.sh@10 -- # set +x 00:28:27.842 ************************************ 00:28:27.842 START TEST spdk_dd_posix 00:28:27.842 ************************************ 00:28:27.842 05:10:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:28:27.842 * Looking for test storage... 
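The clear_nvme step traced above shows how these dd tests hand spdk_dd its block-device configuration: gen_conf prints a JSON "subsystems" document and the tool reads it from an anonymous file descriptor via --json /dev/fd/62, so no config file touches disk. A minimal sketch of the same pattern, reusing the controller address and bdev name from this run (0000:00:06.0, Nvme0/Nvme0n1); it assumes spdk_dd is on PATH and uses process substitution instead of the suite's fd 62:

  conf='{"subsystems":[{"subsystem":"bdev","config":[
    {"params":{"trtype":"pcie","traddr":"0000:00:06.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},
    {"method":"bdev_wait_for_examine"}]}]}'
  # zero one 1 MiB block of the Nvme0n1 bdev, feeding the config on an anonymous fd
  spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(printf '%s' "$conf")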
00:28:27.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:27.842 05:10:57 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:27.842 05:10:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.842 05:10:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.842 05:10:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.842 05:10:57 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:27.843 05:10:57 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:27.843 05:10:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:27.843 05:10:57 -- paths/export.sh@5 -- # export PATH 00:28:27.843 05:10:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:27.843 05:10:57 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:28:27.843 05:10:57 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:28:27.843 05:10:57 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:28:27.843 05:10:57 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:28:27.843 05:10:57 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:27.843 05:10:57 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:27.843 05:10:57 -- dd/posix.sh@130 -- # tests 00:28:27.843 05:10:57 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:28:27.843 * First test run, using AIO 00:28:27.843 05:10:57 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:28:27.843 05:10:57 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:27.843 05:10:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:27.843 05:10:57 -- common/autotest_common.sh@10 -- # set +x 00:28:27.843 ************************************ 00:28:27.843 START TEST dd_flag_append 00:28:27.843 ************************************ 00:28:27.843 05:10:57 -- common/autotest_common.sh@1104 -- # append 00:28:27.843 05:10:57 -- dd/posix.sh@16 -- # local dump0 00:28:27.843 05:10:57 -- dd/posix.sh@17 -- # local dump1 00:28:27.843 05:10:57 -- dd/posix.sh@19 -- # gen_bytes 32 00:28:27.843 05:10:57 -- dd/common.sh@98 -- # xtrace_disable 00:28:27.843 05:10:57 -- common/autotest_common.sh@10 -- # set +x 00:28:27.843 05:10:57 -- dd/posix.sh@19 -- # dump0=j0xw6nfd4g889sag7f4tofdu64zd8j05 00:28:27.843 05:10:57 -- dd/posix.sh@20 -- # gen_bytes 32 00:28:27.843 05:10:57 -- dd/common.sh@98 -- # xtrace_disable 00:28:27.843 05:10:57 -- common/autotest_common.sh@10 -- # set +x 00:28:27.843 05:10:57 -- dd/posix.sh@20 -- # dump1=6ow0fh0ci24ks7m566r6w744uy9fi8mw 00:28:27.843 05:10:57 -- dd/posix.sh@22 -- # printf %s j0xw6nfd4g889sag7f4tofdu64zd8j05 00:28:27.843 05:10:57 -- dd/posix.sh@23 -- # printf %s 6ow0fh0ci24ks7m566r6w744uy9fi8mw 00:28:27.843 05:10:57 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:28:27.843 [2024-04-27 05:10:57.675946] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:27.843 [2024-04-27 05:10:57.676219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146321 ] 00:28:28.102 [2024-04-27 05:10:57.845967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.102 [2024-04-27 05:10:57.958334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.620  Copying: 32/32 [B] (average 31 kBps) 00:28:28.620 00:28:28.620 05:10:58 -- dd/posix.sh@27 -- # [[ 6ow0fh0ci24ks7m566r6w744uy9fi8mwj0xw6nfd4g889sag7f4tofdu64zd8j05 == \6\o\w\0\f\h\0\c\i\2\4\k\s\7\m\5\6\6\r\6\w\7\4\4\u\y\9\f\i\8\m\w\j\0\x\w\6\n\f\d\4\g\8\8\9\s\a\g\7\f\4\t\o\f\d\u\6\4\z\d\8\j\0\5 ]] 00:28:28.620 00:28:28.620 real 0m0.856s 00:28:28.620 user 0m0.464s 00:28:28.620 sys 0m0.253s 00:28:28.620 ************************************ 00:28:28.620 END TEST dd_flag_append 00:28:28.620 ************************************ 00:28:28.620 05:10:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:28.620 05:10:58 -- common/autotest_common.sh@10 -- # set +x 00:28:28.620 05:10:58 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:28:28.620 05:10:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:28.620 05:10:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:28.620 05:10:58 -- common/autotest_common.sh@10 -- # set +x 00:28:28.620 ************************************ 00:28:28.620 START TEST dd_flag_directory 00:28:28.620 ************************************ 00:28:28.620 05:10:58 -- common/autotest_common.sh@1104 -- # directory 00:28:28.620 05:10:58 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:28.620 05:10:58 -- common/autotest_common.sh@640 -- # local es=0 
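The xtrace lines around this point come from the NOT helper in autotest_common.sh, which dd_flag_directory uses as a negative assertion: it runs the spdk_dd command, records the exit status in es, folds large statuses down (236 becomes 108, then 1), and itself succeeds only if the wrapped command failed. A condensed sketch of that idea, with the executable validation and status bucketing simplified and the repository paths abbreviated:

  NOT() {
          local es=0
          "$@" || es=$?
          (( es > 128 )) && es=1   # simplified; the real helper maps e.g. 236 -> 108 -> 1
          (( es != 0 ))            # succeed only when the wrapped command failed
  }
  # expect spdk_dd to refuse --iflag=directory on a regular file ("Not a directory")
  NOT spdk_dd --if=dd.dump0 --iflag=directory --of=dd.dump0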
00:28:28.620 05:10:58 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:28.620 05:10:58 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:28.620 05:10:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:28.620 05:10:58 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:28.620 05:10:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:28.620 05:10:58 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:28.620 05:10:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:28.620 05:10:58 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:28.620 05:10:58 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:28.620 05:10:58 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:28.879 [2024-04-27 05:10:58.588232] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:28.879 [2024-04-27 05:10:58.589244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146364 ] 00:28:28.879 [2024-04-27 05:10:58.757840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.138 [2024-04-27 05:10:58.879692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.138 [2024-04-27 05:10:59.002431] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:29.138 [2024-04-27 05:10:59.002543] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:29.138 [2024-04-27 05:10:59.002590] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:29.397 [2024-04-27 05:10:59.191635] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:29.655 05:10:59 -- common/autotest_common.sh@643 -- # es=236 00:28:29.655 05:10:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:29.655 05:10:59 -- common/autotest_common.sh@652 -- # es=108 00:28:29.655 05:10:59 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:29.655 05:10:59 -- common/autotest_common.sh@660 -- # es=1 00:28:29.655 05:10:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:29.655 05:10:59 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:29.655 05:10:59 -- common/autotest_common.sh@640 -- # local es=0 00:28:29.655 05:10:59 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:29.655 05:10:59 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:29.655 05:10:59 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:29.655 05:10:59 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:29.655 05:10:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:29.655 05:10:59 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:29.655 05:10:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:29.655 05:10:59 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:29.655 05:10:59 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:29.655 05:10:59 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:29.655 [2024-04-27 05:10:59.416035] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:29.655 [2024-04-27 05:10:59.416316] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146384 ] 00:28:29.914 [2024-04-27 05:10:59.588890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.914 [2024-04-27 05:10:59.693719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.914 [2024-04-27 05:10:59.814550] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:29.914 [2024-04-27 05:10:59.814651] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:29.914 [2024-04-27 05:10:59.814711] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:30.173 [2024-04-27 05:11:00.014007] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:30.431 05:11:00 -- common/autotest_common.sh@643 -- # es=236 00:28:30.431 05:11:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:30.431 05:11:00 -- common/autotest_common.sh@652 -- # es=108 00:28:30.431 05:11:00 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:30.431 05:11:00 -- common/autotest_common.sh@660 -- # es=1 00:28:30.431 05:11:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:30.431 00:28:30.431 real 0m1.656s 00:28:30.431 user 0m0.936s 00:28:30.431 sys 0m0.516s 00:28:30.431 05:11:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:30.431 05:11:00 -- common/autotest_common.sh@10 -- # set +x 00:28:30.431 ************************************ 00:28:30.431 END TEST dd_flag_directory 00:28:30.431 ************************************ 00:28:30.431 05:11:00 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:28:30.431 05:11:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:30.431 05:11:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:30.431 05:11:00 -- common/autotest_common.sh@10 -- # set +x 00:28:30.431 ************************************ 00:28:30.431 START TEST dd_flag_nofollow 00:28:30.431 ************************************ 00:28:30.431 05:11:00 -- common/autotest_common.sh@1104 -- # nofollow 00:28:30.431 05:11:00 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:30.431 05:11:00 -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:30.431 05:11:00 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:30.431 05:11:00 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:30.431 05:11:00 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:30.431 05:11:00 -- common/autotest_common.sh@640 -- # local es=0 00:28:30.432 05:11:00 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:30.432 05:11:00 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:30.432 05:11:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:30.432 05:11:00 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:30.432 05:11:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:30.432 05:11:00 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:30.432 05:11:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:30.432 05:11:00 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:30.432 05:11:00 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:30.432 05:11:00 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:30.432 [2024-04-27 05:11:00.310218] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
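For dd_flag_nofollow, running here, the setup is two symlinks created with ln -fs; opening them with the nofollow flag is expected to fail with "Too many levels of symbolic links" (ELOOP), while a plain copy through the link still succeeds. A shortened sketch of the three invocations, reusing the NOT helper sketched earlier and abbreviating the repository paths:

  ln -fs dd.dump0 dd.dump0.link
  ln -fs dd.dump1 dd.dump1.link
  NOT spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1    # read side must fail
  NOT spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow    # write side must fail
  spdk_dd --if=dd.dump0.link --of=dd.dump1                         # following the link is fine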
00:28:30.432 [2024-04-27 05:11:00.310577] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146418 ] 00:28:30.690 [2024-04-27 05:11:00.480926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.690 [2024-04-27 05:11:00.599944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.948 [2024-04-27 05:11:00.723653] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:30.948 [2024-04-27 05:11:00.723758] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:30.948 [2024-04-27 05:11:00.723811] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:31.207 [2024-04-27 05:11:00.907888] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:31.207 05:11:01 -- common/autotest_common.sh@643 -- # es=216 00:28:31.207 05:11:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:31.207 05:11:01 -- common/autotest_common.sh@652 -- # es=88 00:28:31.207 05:11:01 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:31.207 05:11:01 -- common/autotest_common.sh@660 -- # es=1 00:28:31.207 05:11:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:31.207 05:11:01 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:31.207 05:11:01 -- common/autotest_common.sh@640 -- # local es=0 00:28:31.207 05:11:01 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:31.207 05:11:01 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:31.207 05:11:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:31.207 05:11:01 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:31.207 05:11:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:31.207 05:11:01 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:31.207 05:11:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:31.207 05:11:01 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:31.207 05:11:01 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:31.207 05:11:01 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:31.208 [2024-04-27 05:11:01.120054] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:31.208 [2024-04-27 05:11:01.120315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146430 ] 00:28:31.467 [2024-04-27 05:11:01.289814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.726 [2024-04-27 05:11:01.390145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.726 [2024-04-27 05:11:01.519367] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:31.726 [2024-04-27 05:11:01.519521] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:31.726 [2024-04-27 05:11:01.519582] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:31.986 [2024-04-27 05:11:01.702602] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:31.986 05:11:01 -- common/autotest_common.sh@643 -- # es=216 00:28:31.986 05:11:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:31.986 05:11:01 -- common/autotest_common.sh@652 -- # es=88 00:28:31.986 05:11:01 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:31.986 05:11:01 -- common/autotest_common.sh@660 -- # es=1 00:28:31.986 05:11:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:31.986 05:11:01 -- dd/posix.sh@46 -- # gen_bytes 512 00:28:31.986 05:11:01 -- dd/common.sh@98 -- # xtrace_disable 00:28:31.986 05:11:01 -- common/autotest_common.sh@10 -- # set +x 00:28:31.986 05:11:01 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:32.245 [2024-04-27 05:11:01.930269] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:32.245 [2024-04-27 05:11:01.930557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146446 ] 00:28:32.245 [2024-04-27 05:11:02.100770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.504 [2024-04-27 05:11:02.207061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.763  Copying: 512/512 [B] (average 500 kBps) 00:28:32.763 00:28:33.021 05:11:02 -- dd/posix.sh@49 -- # [[ 5an4youl9yqacqlkwzdwyf0g3inm72ux22zclqbf2fc5zjuo5p6rrpzcacsn9cswtpvgn96rb86t3uv2qpb0g8x5edaicsbso381va73gxg04nfspvjp25ygok2cf08qm5t4my5gp98cadvr2n8s1njdkc3zu5sgpr6xujm5y9bud6k1gfbfsh0dn1jymv1ttlfifgnyexrlxs45da41pcxsq8sbaiknl066kzgscfox3urp8it5nygnznlgqjekap2c4rc5t2uka3gx78k8xojnvhk1yhjfujqcn3ohkwkjzk4okqcz4nphl9ptxbpf4p2ik5nwmw7b52ld71g8h2monbb3i91yettcqcjgk3jjwkf2xxnhbqjb8g8fzh968rmzb1gy7j4sm00j0l2zk11renls0il1nafpnevu36gx5b8rviuq5tw02pef14uz9iern5gmvgsahd79cbj4dn4mi3vxvr3wms0zsbjtno6vfvqs6421c8766rdbnhrj == \5\a\n\4\y\o\u\l\9\y\q\a\c\q\l\k\w\z\d\w\y\f\0\g\3\i\n\m\7\2\u\x\2\2\z\c\l\q\b\f\2\f\c\5\z\j\u\o\5\p\6\r\r\p\z\c\a\c\s\n\9\c\s\w\t\p\v\g\n\9\6\r\b\8\6\t\3\u\v\2\q\p\b\0\g\8\x\5\e\d\a\i\c\s\b\s\o\3\8\1\v\a\7\3\g\x\g\0\4\n\f\s\p\v\j\p\2\5\y\g\o\k\2\c\f\0\8\q\m\5\t\4\m\y\5\g\p\9\8\c\a\d\v\r\2\n\8\s\1\n\j\d\k\c\3\z\u\5\s\g\p\r\6\x\u\j\m\5\y\9\b\u\d\6\k\1\g\f\b\f\s\h\0\d\n\1\j\y\m\v\1\t\t\l\f\i\f\g\n\y\e\x\r\l\x\s\4\5\d\a\4\1\p\c\x\s\q\8\s\b\a\i\k\n\l\0\6\6\k\z\g\s\c\f\o\x\3\u\r\p\8\i\t\5\n\y\g\n\z\n\l\g\q\j\e\k\a\p\2\c\4\r\c\5\t\2\u\k\a\3\g\x\7\8\k\8\x\o\j\n\v\h\k\1\y\h\j\f\u\j\q\c\n\3\o\h\k\w\k\j\z\k\4\o\k\q\c\z\4\n\p\h\l\9\p\t\x\b\p\f\4\p\2\i\k\5\n\w\m\w\7\b\5\2\l\d\7\1\g\8\h\2\m\o\n\b\b\3\i\9\1\y\e\t\t\c\q\c\j\g\k\3\j\j\w\k\f\2\x\x\n\h\b\q\j\b\8\g\8\f\z\h\9\6\8\r\m\z\b\1\g\y\7\j\4\s\m\0\0\j\0\l\2\z\k\1\1\r\e\n\l\s\0\i\l\1\n\a\f\p\n\e\v\u\3\6\g\x\5\b\8\r\v\i\u\q\5\t\w\0\2\p\e\f\1\4\u\z\9\i\e\r\n\5\g\m\v\g\s\a\h\d\7\9\c\b\j\4\d\n\4\m\i\3\v\x\v\r\3\w\m\s\0\z\s\b\j\t\n\o\6\v\f\v\q\s\6\4\2\1\c\8\7\6\6\r\d\b\n\h\r\j ]] 00:28:33.021 00:28:33.021 real 0m2.454s 00:28:33.021 user 0m1.361s 00:28:33.021 sys 0m0.758s 00:28:33.021 05:11:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:33.021 05:11:02 -- common/autotest_common.sh@10 -- # set +x 00:28:33.021 ************************************ 00:28:33.021 END TEST dd_flag_nofollow 00:28:33.021 ************************************ 00:28:33.021 05:11:02 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:28:33.021 05:11:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:33.021 05:11:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:33.021 05:11:02 -- common/autotest_common.sh@10 -- # set +x 00:28:33.021 ************************************ 00:28:33.021 START TEST dd_flag_noatime 00:28:33.021 ************************************ 00:28:33.021 05:11:02 -- common/autotest_common.sh@1104 -- # noatime 00:28:33.021 05:11:02 -- dd/posix.sh@53 -- # local atime_if 00:28:33.021 05:11:02 -- dd/posix.sh@54 -- # local atime_of 00:28:33.021 05:11:02 -- dd/posix.sh@58 -- # gen_bytes 512 00:28:33.021 05:11:02 -- dd/common.sh@98 -- # xtrace_disable 00:28:33.021 05:11:02 -- common/autotest_common.sh@10 -- # set +x 00:28:33.021 05:11:02 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:33.021 05:11:02 -- dd/posix.sh@60 -- # atime_if=1714194662 00:28:33.022 05:11:02 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:33.022 05:11:02 -- dd/posix.sh@61 -- # atime_of=1714194662 00:28:33.022 05:11:02 -- dd/posix.sh@66 -- # sleep 1 00:28:33.983 05:11:03 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:33.983 [2024-04-27 05:11:03.835279] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:33.983 [2024-04-27 05:11:03.835616] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146499 ] 00:28:34.241 [2024-04-27 05:11:04.006571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.241 [2024-04-27 05:11:04.143713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.065  Copying: 512/512 [B] (average 500 kBps) 00:28:35.065 00:28:35.065 05:11:04 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:35.065 05:11:04 -- dd/posix.sh@69 -- # (( atime_if == 1714194662 )) 00:28:35.065 05:11:04 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:35.065 05:11:04 -- dd/posix.sh@70 -- # (( atime_of == 1714194662 )) 00:28:35.065 05:11:04 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:35.065 [2024-04-27 05:11:04.779873] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:35.065 [2024-04-27 05:11:04.780173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146518 ] 00:28:35.065 [2024-04-27 05:11:04.953111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.322 [2024-04-27 05:11:05.081489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.890  Copying: 512/512 [B] (average 500 kBps) 00:28:35.890 00:28:35.890 05:11:05 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:35.890 05:11:05 -- dd/posix.sh@73 -- # (( atime_if < 1714194665 )) 00:28:35.890 00:28:35.890 real 0m2.902s 00:28:35.890 user 0m1.123s 00:28:35.890 sys 0m0.506s 00:28:35.890 05:11:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:35.890 05:11:05 -- common/autotest_common.sh@10 -- # set +x 00:28:35.890 ************************************ 00:28:35.890 END TEST dd_flag_noatime 00:28:35.890 ************************************ 00:28:35.890 05:11:05 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:28:35.890 05:11:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:35.890 05:11:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:35.890 05:11:05 -- common/autotest_common.sh@10 -- # set +x 00:28:35.890 ************************************ 00:28:35.890 START TEST dd_flags_misc 00:28:35.890 ************************************ 00:28:35.890 05:11:05 -- common/autotest_common.sh@1104 -- # io 00:28:35.890 05:11:05 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:28:35.890 05:11:05 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
00:28:35.890 05:11:05 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:28:35.890 05:11:05 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:35.890 05:11:05 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:35.890 05:11:05 -- dd/common.sh@98 -- # xtrace_disable 00:28:35.890 05:11:05 -- common/autotest_common.sh@10 -- # set +x 00:28:35.890 05:11:05 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:35.890 05:11:05 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:35.890 [2024-04-27 05:11:05.785738] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:35.890 [2024-04-27 05:11:05.786266] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146555 ] 00:28:36.148 [2024-04-27 05:11:05.957704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.149 [2024-04-27 05:11:06.047078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.667  Copying: 512/512 [B] (average 500 kBps) 00:28:36.667 00:28:36.667 05:11:06 -- dd/posix.sh@93 -- # [[ yd11l54fh1wot2pi5emlcbei98qxtqz2jrs9318oawsokdj76ylv52gf0408tn8v8b73pdyjr11dmap5yjsque2dem0kivjls1rnsqk9trmxkr70grghk3t07qezhpaq84h7uhlqr2ns4v91vcyxqlt24rarcy5iidsp23ic32i41i00fcrr4ftkqac2qyjwls4ht4rfvulr1uagrwr3pspq6rwn3ktbeiqj3hfu4rcxqs9vzgubw0wu5bnuo13otkupkeuaqc1oaxfdl9i5saxagzgy1m5t4zlpbvznpad4gh06riweu56rwzc6ig0has8o3m11t2hbs4ssd6zmz3yjoofop7e0km6prznd9w5bws6ftrfs64a9dsikmeusqr32i3rw33wduophfzprnl8b7ahqaj3fk4vcxl7ts9xthax83milbi5l30p9siuhm08t3e8jcc7docnj1hevsf3nkn6gfv57agt23ga9cldcj2izhcrejzbo1dmtzwqn == \y\d\1\1\l\5\4\f\h\1\w\o\t\2\p\i\5\e\m\l\c\b\e\i\9\8\q\x\t\q\z\2\j\r\s\9\3\1\8\o\a\w\s\o\k\d\j\7\6\y\l\v\5\2\g\f\0\4\0\8\t\n\8\v\8\b\7\3\p\d\y\j\r\1\1\d\m\a\p\5\y\j\s\q\u\e\2\d\e\m\0\k\i\v\j\l\s\1\r\n\s\q\k\9\t\r\m\x\k\r\7\0\g\r\g\h\k\3\t\0\7\q\e\z\h\p\a\q\8\4\h\7\u\h\l\q\r\2\n\s\4\v\9\1\v\c\y\x\q\l\t\2\4\r\a\r\c\y\5\i\i\d\s\p\2\3\i\c\3\2\i\4\1\i\0\0\f\c\r\r\4\f\t\k\q\a\c\2\q\y\j\w\l\s\4\h\t\4\r\f\v\u\l\r\1\u\a\g\r\w\r\3\p\s\p\q\6\r\w\n\3\k\t\b\e\i\q\j\3\h\f\u\4\r\c\x\q\s\9\v\z\g\u\b\w\0\w\u\5\b\n\u\o\1\3\o\t\k\u\p\k\e\u\a\q\c\1\o\a\x\f\d\l\9\i\5\s\a\x\a\g\z\g\y\1\m\5\t\4\z\l\p\b\v\z\n\p\a\d\4\g\h\0\6\r\i\w\e\u\5\6\r\w\z\c\6\i\g\0\h\a\s\8\o\3\m\1\1\t\2\h\b\s\4\s\s\d\6\z\m\z\3\y\j\o\o\f\o\p\7\e\0\k\m\6\p\r\z\n\d\9\w\5\b\w\s\6\f\t\r\f\s\6\4\a\9\d\s\i\k\m\e\u\s\q\r\3\2\i\3\r\w\3\3\w\d\u\o\p\h\f\z\p\r\n\l\8\b\7\a\h\q\a\j\3\f\k\4\v\c\x\l\7\t\s\9\x\t\h\a\x\8\3\m\i\l\b\i\5\l\3\0\p\9\s\i\u\h\m\0\8\t\3\e\8\j\c\c\7\d\o\c\n\j\1\h\e\v\s\f\3\n\k\n\6\g\f\v\5\7\a\g\t\2\3\g\a\9\c\l\d\c\j\2\i\z\h\c\r\e\j\z\b\o\1\d\m\t\z\w\q\n ]] 00:28:36.667 05:11:06 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:36.667 05:11:06 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:36.926 [2024-04-27 05:11:06.635486] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
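dd_flags_misc, which produced the long escaped [[ ... ]] comparison just above, sweeps a small matrix of open flags: input files are read with direct and nonblock, output files are written with direct, nonblock, sync and dsync, and after every combination the 512 generated bytes are checked against the copy. A condensed paraphrase of the loop; it assumes gen_bytes both fills dd.dump0 and prints the payload, which simplifies the real helper, and again abbreviates the paths:

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
          payload=$(gen_bytes 512)                       # assumed to also populate dd.dump0
          for flag_rw in "${flags_rw[@]}"; do
                  spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
                  [[ $(< dd.dump1) == "$payload" ]]      # the copy must match the source bytes
          done
  done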
00:28:36.926 [2024-04-27 05:11:06.635782] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146570 ] 00:28:36.926 [2024-04-27 05:11:06.805819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.185 [2024-04-27 05:11:06.932194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.753  Copying: 512/512 [B] (average 500 kBps) 00:28:37.753 00:28:37.753 05:11:07 -- dd/posix.sh@93 -- # [[ yd11l54fh1wot2pi5emlcbei98qxtqz2jrs9318oawsokdj76ylv52gf0408tn8v8b73pdyjr11dmap5yjsque2dem0kivjls1rnsqk9trmxkr70grghk3t07qezhpaq84h7uhlqr2ns4v91vcyxqlt24rarcy5iidsp23ic32i41i00fcrr4ftkqac2qyjwls4ht4rfvulr1uagrwr3pspq6rwn3ktbeiqj3hfu4rcxqs9vzgubw0wu5bnuo13otkupkeuaqc1oaxfdl9i5saxagzgy1m5t4zlpbvznpad4gh06riweu56rwzc6ig0has8o3m11t2hbs4ssd6zmz3yjoofop7e0km6prznd9w5bws6ftrfs64a9dsikmeusqr32i3rw33wduophfzprnl8b7ahqaj3fk4vcxl7ts9xthax83milbi5l30p9siuhm08t3e8jcc7docnj1hevsf3nkn6gfv57agt23ga9cldcj2izhcrejzbo1dmtzwqn == \y\d\1\1\l\5\4\f\h\1\w\o\t\2\p\i\5\e\m\l\c\b\e\i\9\8\q\x\t\q\z\2\j\r\s\9\3\1\8\o\a\w\s\o\k\d\j\7\6\y\l\v\5\2\g\f\0\4\0\8\t\n\8\v\8\b\7\3\p\d\y\j\r\1\1\d\m\a\p\5\y\j\s\q\u\e\2\d\e\m\0\k\i\v\j\l\s\1\r\n\s\q\k\9\t\r\m\x\k\r\7\0\g\r\g\h\k\3\t\0\7\q\e\z\h\p\a\q\8\4\h\7\u\h\l\q\r\2\n\s\4\v\9\1\v\c\y\x\q\l\t\2\4\r\a\r\c\y\5\i\i\d\s\p\2\3\i\c\3\2\i\4\1\i\0\0\f\c\r\r\4\f\t\k\q\a\c\2\q\y\j\w\l\s\4\h\t\4\r\f\v\u\l\r\1\u\a\g\r\w\r\3\p\s\p\q\6\r\w\n\3\k\t\b\e\i\q\j\3\h\f\u\4\r\c\x\q\s\9\v\z\g\u\b\w\0\w\u\5\b\n\u\o\1\3\o\t\k\u\p\k\e\u\a\q\c\1\o\a\x\f\d\l\9\i\5\s\a\x\a\g\z\g\y\1\m\5\t\4\z\l\p\b\v\z\n\p\a\d\4\g\h\0\6\r\i\w\e\u\5\6\r\w\z\c\6\i\g\0\h\a\s\8\o\3\m\1\1\t\2\h\b\s\4\s\s\d\6\z\m\z\3\y\j\o\o\f\o\p\7\e\0\k\m\6\p\r\z\n\d\9\w\5\b\w\s\6\f\t\r\f\s\6\4\a\9\d\s\i\k\m\e\u\s\q\r\3\2\i\3\r\w\3\3\w\d\u\o\p\h\f\z\p\r\n\l\8\b\7\a\h\q\a\j\3\f\k\4\v\c\x\l\7\t\s\9\x\t\h\a\x\8\3\m\i\l\b\i\5\l\3\0\p\9\s\i\u\h\m\0\8\t\3\e\8\j\c\c\7\d\o\c\n\j\1\h\e\v\s\f\3\n\k\n\6\g\f\v\5\7\a\g\t\2\3\g\a\9\c\l\d\c\j\2\i\z\h\c\r\e\j\z\b\o\1\d\m\t\z\w\q\n ]] 00:28:37.753 05:11:07 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:37.753 05:11:07 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:37.754 [2024-04-27 05:11:07.554113] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:37.754 [2024-04-27 05:11:07.554396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146587 ] 00:28:38.013 [2024-04-27 05:11:07.725319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.013 [2024-04-27 05:11:07.842483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.841  Copying: 512/512 [B] (average 166 kBps) 00:28:38.841 00:28:38.841 05:11:08 -- dd/posix.sh@93 -- # [[ yd11l54fh1wot2pi5emlcbei98qxtqz2jrs9318oawsokdj76ylv52gf0408tn8v8b73pdyjr11dmap5yjsque2dem0kivjls1rnsqk9trmxkr70grghk3t07qezhpaq84h7uhlqr2ns4v91vcyxqlt24rarcy5iidsp23ic32i41i00fcrr4ftkqac2qyjwls4ht4rfvulr1uagrwr3pspq6rwn3ktbeiqj3hfu4rcxqs9vzgubw0wu5bnuo13otkupkeuaqc1oaxfdl9i5saxagzgy1m5t4zlpbvznpad4gh06riweu56rwzc6ig0has8o3m11t2hbs4ssd6zmz3yjoofop7e0km6prznd9w5bws6ftrfs64a9dsikmeusqr32i3rw33wduophfzprnl8b7ahqaj3fk4vcxl7ts9xthax83milbi5l30p9siuhm08t3e8jcc7docnj1hevsf3nkn6gfv57agt23ga9cldcj2izhcrejzbo1dmtzwqn == \y\d\1\1\l\5\4\f\h\1\w\o\t\2\p\i\5\e\m\l\c\b\e\i\9\8\q\x\t\q\z\2\j\r\s\9\3\1\8\o\a\w\s\o\k\d\j\7\6\y\l\v\5\2\g\f\0\4\0\8\t\n\8\v\8\b\7\3\p\d\y\j\r\1\1\d\m\a\p\5\y\j\s\q\u\e\2\d\e\m\0\k\i\v\j\l\s\1\r\n\s\q\k\9\t\r\m\x\k\r\7\0\g\r\g\h\k\3\t\0\7\q\e\z\h\p\a\q\8\4\h\7\u\h\l\q\r\2\n\s\4\v\9\1\v\c\y\x\q\l\t\2\4\r\a\r\c\y\5\i\i\d\s\p\2\3\i\c\3\2\i\4\1\i\0\0\f\c\r\r\4\f\t\k\q\a\c\2\q\y\j\w\l\s\4\h\t\4\r\f\v\u\l\r\1\u\a\g\r\w\r\3\p\s\p\q\6\r\w\n\3\k\t\b\e\i\q\j\3\h\f\u\4\r\c\x\q\s\9\v\z\g\u\b\w\0\w\u\5\b\n\u\o\1\3\o\t\k\u\p\k\e\u\a\q\c\1\o\a\x\f\d\l\9\i\5\s\a\x\a\g\z\g\y\1\m\5\t\4\z\l\p\b\v\z\n\p\a\d\4\g\h\0\6\r\i\w\e\u\5\6\r\w\z\c\6\i\g\0\h\a\s\8\o\3\m\1\1\t\2\h\b\s\4\s\s\d\6\z\m\z\3\y\j\o\o\f\o\p\7\e\0\k\m\6\p\r\z\n\d\9\w\5\b\w\s\6\f\t\r\f\s\6\4\a\9\d\s\i\k\m\e\u\s\q\r\3\2\i\3\r\w\3\3\w\d\u\o\p\h\f\z\p\r\n\l\8\b\7\a\h\q\a\j\3\f\k\4\v\c\x\l\7\t\s\9\x\t\h\a\x\8\3\m\i\l\b\i\5\l\3\0\p\9\s\i\u\h\m\0\8\t\3\e\8\j\c\c\7\d\o\c\n\j\1\h\e\v\s\f\3\n\k\n\6\g\f\v\5\7\a\g\t\2\3\g\a\9\c\l\d\c\j\2\i\z\h\c\r\e\j\z\b\o\1\d\m\t\z\w\q\n ]] 00:28:38.841 05:11:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:38.841 05:11:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:38.841 [2024-04-27 05:11:08.583972] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:38.841 [2024-04-27 05:11:08.584249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146606 ] 00:28:38.841 [2024-04-27 05:11:08.754064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.101 [2024-04-27 05:11:08.871572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.669  Copying: 512/512 [B] (average 166 kBps) 00:28:39.669 00:28:39.669 05:11:09 -- dd/posix.sh@93 -- # [[ yd11l54fh1wot2pi5emlcbei98qxtqz2jrs9318oawsokdj76ylv52gf0408tn8v8b73pdyjr11dmap5yjsque2dem0kivjls1rnsqk9trmxkr70grghk3t07qezhpaq84h7uhlqr2ns4v91vcyxqlt24rarcy5iidsp23ic32i41i00fcrr4ftkqac2qyjwls4ht4rfvulr1uagrwr3pspq6rwn3ktbeiqj3hfu4rcxqs9vzgubw0wu5bnuo13otkupkeuaqc1oaxfdl9i5saxagzgy1m5t4zlpbvznpad4gh06riweu56rwzc6ig0has8o3m11t2hbs4ssd6zmz3yjoofop7e0km6prznd9w5bws6ftrfs64a9dsikmeusqr32i3rw33wduophfzprnl8b7ahqaj3fk4vcxl7ts9xthax83milbi5l30p9siuhm08t3e8jcc7docnj1hevsf3nkn6gfv57agt23ga9cldcj2izhcrejzbo1dmtzwqn == \y\d\1\1\l\5\4\f\h\1\w\o\t\2\p\i\5\e\m\l\c\b\e\i\9\8\q\x\t\q\z\2\j\r\s\9\3\1\8\o\a\w\s\o\k\d\j\7\6\y\l\v\5\2\g\f\0\4\0\8\t\n\8\v\8\b\7\3\p\d\y\j\r\1\1\d\m\a\p\5\y\j\s\q\u\e\2\d\e\m\0\k\i\v\j\l\s\1\r\n\s\q\k\9\t\r\m\x\k\r\7\0\g\r\g\h\k\3\t\0\7\q\e\z\h\p\a\q\8\4\h\7\u\h\l\q\r\2\n\s\4\v\9\1\v\c\y\x\q\l\t\2\4\r\a\r\c\y\5\i\i\d\s\p\2\3\i\c\3\2\i\4\1\i\0\0\f\c\r\r\4\f\t\k\q\a\c\2\q\y\j\w\l\s\4\h\t\4\r\f\v\u\l\r\1\u\a\g\r\w\r\3\p\s\p\q\6\r\w\n\3\k\t\b\e\i\q\j\3\h\f\u\4\r\c\x\q\s\9\v\z\g\u\b\w\0\w\u\5\b\n\u\o\1\3\o\t\k\u\p\k\e\u\a\q\c\1\o\a\x\f\d\l\9\i\5\s\a\x\a\g\z\g\y\1\m\5\t\4\z\l\p\b\v\z\n\p\a\d\4\g\h\0\6\r\i\w\e\u\5\6\r\w\z\c\6\i\g\0\h\a\s\8\o\3\m\1\1\t\2\h\b\s\4\s\s\d\6\z\m\z\3\y\j\o\o\f\o\p\7\e\0\k\m\6\p\r\z\n\d\9\w\5\b\w\s\6\f\t\r\f\s\6\4\a\9\d\s\i\k\m\e\u\s\q\r\3\2\i\3\r\w\3\3\w\d\u\o\p\h\f\z\p\r\n\l\8\b\7\a\h\q\a\j\3\f\k\4\v\c\x\l\7\t\s\9\x\t\h\a\x\8\3\m\i\l\b\i\5\l\3\0\p\9\s\i\u\h\m\0\8\t\3\e\8\j\c\c\7\d\o\c\n\j\1\h\e\v\s\f\3\n\k\n\6\g\f\v\5\7\a\g\t\2\3\g\a\9\c\l\d\c\j\2\i\z\h\c\r\e\j\z\b\o\1\d\m\t\z\w\q\n ]] 00:28:39.669 05:11:09 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:39.669 05:11:09 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:39.669 05:11:09 -- dd/common.sh@98 -- # xtrace_disable 00:28:39.669 05:11:09 -- common/autotest_common.sh@10 -- # set +x 00:28:39.669 05:11:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:39.669 05:11:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:39.669 [2024-04-27 05:11:09.515918] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:39.669 [2024-04-27 05:11:09.516212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146619 ] 00:28:39.928 [2024-04-27 05:11:09.689816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.928 [2024-04-27 05:11:09.810337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.753  Copying: 512/512 [B] (average 500 kBps) 00:28:40.753 00:28:40.753 05:11:10 -- dd/posix.sh@93 -- # [[ qn7yyg3qy2bvcguakkedar6vjvm9pgewcuiev3h882xbv154q6m38ihhajhii0nuweathcw7l0jlbmjaokc8rayd2co0diz2d4v8syfmi9qzorplg6mw0lwnmz7udrdvoqy8z4x1mw4dedxx7td9pt0gjevui1demu9nizeiu5h243ch41ngikw5ozctpht553fxtic2m7htvghqksjyok7smcl0tmkr89imxfyxcoc87sqyt19hp4pn2ec3fufdaasj5owy6yoq80rcj9vpqn1xvfy5xt01eltv668khtl2az9mm6gu7m6tmur106eeu1pizepul7lbncibs4jsujhnr0p47655quw3kvi6zifnt18seaytwjzo4iqf758fk0qeazp5k2ex813td25kahddbvp4e5ukzu5oycqsx98fhhju1vssmkvbig03h9gkunyehewqbmiy7eskkn1yy0o96j3y06scblm394mmc71im8ajpi5kcz688mbpvt8h == \q\n\7\y\y\g\3\q\y\2\b\v\c\g\u\a\k\k\e\d\a\r\6\v\j\v\m\9\p\g\e\w\c\u\i\e\v\3\h\8\8\2\x\b\v\1\5\4\q\6\m\3\8\i\h\h\a\j\h\i\i\0\n\u\w\e\a\t\h\c\w\7\l\0\j\l\b\m\j\a\o\k\c\8\r\a\y\d\2\c\o\0\d\i\z\2\d\4\v\8\s\y\f\m\i\9\q\z\o\r\p\l\g\6\m\w\0\l\w\n\m\z\7\u\d\r\d\v\o\q\y\8\z\4\x\1\m\w\4\d\e\d\x\x\7\t\d\9\p\t\0\g\j\e\v\u\i\1\d\e\m\u\9\n\i\z\e\i\u\5\h\2\4\3\c\h\4\1\n\g\i\k\w\5\o\z\c\t\p\h\t\5\5\3\f\x\t\i\c\2\m\7\h\t\v\g\h\q\k\s\j\y\o\k\7\s\m\c\l\0\t\m\k\r\8\9\i\m\x\f\y\x\c\o\c\8\7\s\q\y\t\1\9\h\p\4\p\n\2\e\c\3\f\u\f\d\a\a\s\j\5\o\w\y\6\y\o\q\8\0\r\c\j\9\v\p\q\n\1\x\v\f\y\5\x\t\0\1\e\l\t\v\6\6\8\k\h\t\l\2\a\z\9\m\m\6\g\u\7\m\6\t\m\u\r\1\0\6\e\e\u\1\p\i\z\e\p\u\l\7\l\b\n\c\i\b\s\4\j\s\u\j\h\n\r\0\p\4\7\6\5\5\q\u\w\3\k\v\i\6\z\i\f\n\t\1\8\s\e\a\y\t\w\j\z\o\4\i\q\f\7\5\8\f\k\0\q\e\a\z\p\5\k\2\e\x\8\1\3\t\d\2\5\k\a\h\d\d\b\v\p\4\e\5\u\k\z\u\5\o\y\c\q\s\x\9\8\f\h\h\j\u\1\v\s\s\m\k\v\b\i\g\0\3\h\9\g\k\u\n\y\e\h\e\w\q\b\m\i\y\7\e\s\k\k\n\1\y\y\0\o\9\6\j\3\y\0\6\s\c\b\l\m\3\9\4\m\m\c\7\1\i\m\8\a\j\p\i\5\k\c\z\6\8\8\m\b\p\v\t\8\h ]] 00:28:40.753 05:11:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:40.754 05:11:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:40.754 [2024-04-27 05:11:10.492977] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:40.754 [2024-04-27 05:11:10.493277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146636 ] 00:28:40.754 [2024-04-27 05:11:10.668565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.013 [2024-04-27 05:11:10.791533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.531  Copying: 512/512 [B] (average 500 kBps) 00:28:41.531 00:28:41.531 05:11:11 -- dd/posix.sh@93 -- # [[ qn7yyg3qy2bvcguakkedar6vjvm9pgewcuiev3h882xbv154q6m38ihhajhii0nuweathcw7l0jlbmjaokc8rayd2co0diz2d4v8syfmi9qzorplg6mw0lwnmz7udrdvoqy8z4x1mw4dedxx7td9pt0gjevui1demu9nizeiu5h243ch41ngikw5ozctpht553fxtic2m7htvghqksjyok7smcl0tmkr89imxfyxcoc87sqyt19hp4pn2ec3fufdaasj5owy6yoq80rcj9vpqn1xvfy5xt01eltv668khtl2az9mm6gu7m6tmur106eeu1pizepul7lbncibs4jsujhnr0p47655quw3kvi6zifnt18seaytwjzo4iqf758fk0qeazp5k2ex813td25kahddbvp4e5ukzu5oycqsx98fhhju1vssmkvbig03h9gkunyehewqbmiy7eskkn1yy0o96j3y06scblm394mmc71im8ajpi5kcz688mbpvt8h == \q\n\7\y\y\g\3\q\y\2\b\v\c\g\u\a\k\k\e\d\a\r\6\v\j\v\m\9\p\g\e\w\c\u\i\e\v\3\h\8\8\2\x\b\v\1\5\4\q\6\m\3\8\i\h\h\a\j\h\i\i\0\n\u\w\e\a\t\h\c\w\7\l\0\j\l\b\m\j\a\o\k\c\8\r\a\y\d\2\c\o\0\d\i\z\2\d\4\v\8\s\y\f\m\i\9\q\z\o\r\p\l\g\6\m\w\0\l\w\n\m\z\7\u\d\r\d\v\o\q\y\8\z\4\x\1\m\w\4\d\e\d\x\x\7\t\d\9\p\t\0\g\j\e\v\u\i\1\d\e\m\u\9\n\i\z\e\i\u\5\h\2\4\3\c\h\4\1\n\g\i\k\w\5\o\z\c\t\p\h\t\5\5\3\f\x\t\i\c\2\m\7\h\t\v\g\h\q\k\s\j\y\o\k\7\s\m\c\l\0\t\m\k\r\8\9\i\m\x\f\y\x\c\o\c\8\7\s\q\y\t\1\9\h\p\4\p\n\2\e\c\3\f\u\f\d\a\a\s\j\5\o\w\y\6\y\o\q\8\0\r\c\j\9\v\p\q\n\1\x\v\f\y\5\x\t\0\1\e\l\t\v\6\6\8\k\h\t\l\2\a\z\9\m\m\6\g\u\7\m\6\t\m\u\r\1\0\6\e\e\u\1\p\i\z\e\p\u\l\7\l\b\n\c\i\b\s\4\j\s\u\j\h\n\r\0\p\4\7\6\5\5\q\u\w\3\k\v\i\6\z\i\f\n\t\1\8\s\e\a\y\t\w\j\z\o\4\i\q\f\7\5\8\f\k\0\q\e\a\z\p\5\k\2\e\x\8\1\3\t\d\2\5\k\a\h\d\d\b\v\p\4\e\5\u\k\z\u\5\o\y\c\q\s\x\9\8\f\h\h\j\u\1\v\s\s\m\k\v\b\i\g\0\3\h\9\g\k\u\n\y\e\h\e\w\q\b\m\i\y\7\e\s\k\k\n\1\y\y\0\o\9\6\j\3\y\0\6\s\c\b\l\m\3\9\4\m\m\c\7\1\i\m\8\a\j\p\i\5\k\c\z\6\8\8\m\b\p\v\t\8\h ]] 00:28:41.531 05:11:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:41.531 05:11:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:41.790 [2024-04-27 05:11:11.473647] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:41.790 [2024-04-27 05:11:11.473929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146653 ] 00:28:41.790 [2024-04-27 05:11:11.644923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.048 [2024-04-27 05:11:11.767243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.616  Copying: 512/512 [B] (average 125 kBps) 00:28:42.616 00:28:42.616 05:11:12 -- dd/posix.sh@93 -- # [[ qn7yyg3qy2bvcguakkedar6vjvm9pgewcuiev3h882xbv154q6m38ihhajhii0nuweathcw7l0jlbmjaokc8rayd2co0diz2d4v8syfmi9qzorplg6mw0lwnmz7udrdvoqy8z4x1mw4dedxx7td9pt0gjevui1demu9nizeiu5h243ch41ngikw5ozctpht553fxtic2m7htvghqksjyok7smcl0tmkr89imxfyxcoc87sqyt19hp4pn2ec3fufdaasj5owy6yoq80rcj9vpqn1xvfy5xt01eltv668khtl2az9mm6gu7m6tmur106eeu1pizepul7lbncibs4jsujhnr0p47655quw3kvi6zifnt18seaytwjzo4iqf758fk0qeazp5k2ex813td25kahddbvp4e5ukzu5oycqsx98fhhju1vssmkvbig03h9gkunyehewqbmiy7eskkn1yy0o96j3y06scblm394mmc71im8ajpi5kcz688mbpvt8h == \q\n\7\y\y\g\3\q\y\2\b\v\c\g\u\a\k\k\e\d\a\r\6\v\j\v\m\9\p\g\e\w\c\u\i\e\v\3\h\8\8\2\x\b\v\1\5\4\q\6\m\3\8\i\h\h\a\j\h\i\i\0\n\u\w\e\a\t\h\c\w\7\l\0\j\l\b\m\j\a\o\k\c\8\r\a\y\d\2\c\o\0\d\i\z\2\d\4\v\8\s\y\f\m\i\9\q\z\o\r\p\l\g\6\m\w\0\l\w\n\m\z\7\u\d\r\d\v\o\q\y\8\z\4\x\1\m\w\4\d\e\d\x\x\7\t\d\9\p\t\0\g\j\e\v\u\i\1\d\e\m\u\9\n\i\z\e\i\u\5\h\2\4\3\c\h\4\1\n\g\i\k\w\5\o\z\c\t\p\h\t\5\5\3\f\x\t\i\c\2\m\7\h\t\v\g\h\q\k\s\j\y\o\k\7\s\m\c\l\0\t\m\k\r\8\9\i\m\x\f\y\x\c\o\c\8\7\s\q\y\t\1\9\h\p\4\p\n\2\e\c\3\f\u\f\d\a\a\s\j\5\o\w\y\6\y\o\q\8\0\r\c\j\9\v\p\q\n\1\x\v\f\y\5\x\t\0\1\e\l\t\v\6\6\8\k\h\t\l\2\a\z\9\m\m\6\g\u\7\m\6\t\m\u\r\1\0\6\e\e\u\1\p\i\z\e\p\u\l\7\l\b\n\c\i\b\s\4\j\s\u\j\h\n\r\0\p\4\7\6\5\5\q\u\w\3\k\v\i\6\z\i\f\n\t\1\8\s\e\a\y\t\w\j\z\o\4\i\q\f\7\5\8\f\k\0\q\e\a\z\p\5\k\2\e\x\8\1\3\t\d\2\5\k\a\h\d\d\b\v\p\4\e\5\u\k\z\u\5\o\y\c\q\s\x\9\8\f\h\h\j\u\1\v\s\s\m\k\v\b\i\g\0\3\h\9\g\k\u\n\y\e\h\e\w\q\b\m\i\y\7\e\s\k\k\n\1\y\y\0\o\9\6\j\3\y\0\6\s\c\b\l\m\3\9\4\m\m\c\7\1\i\m\8\a\j\p\i\5\k\c\z\6\8\8\m\b\p\v\t\8\h ]] 00:28:42.616 05:11:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:42.616 05:11:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:42.616 [2024-04-27 05:11:12.438015] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:42.616 [2024-04-27 05:11:12.438257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146670 ] 00:28:42.876 [2024-04-27 05:11:12.596937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.876 [2024-04-27 05:11:12.717095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.701  Copying: 512/512 [B] (average 166 kBps) 00:28:43.701 00:28:43.701 05:11:13 -- dd/posix.sh@93 -- # [[ qn7yyg3qy2bvcguakkedar6vjvm9pgewcuiev3h882xbv154q6m38ihhajhii0nuweathcw7l0jlbmjaokc8rayd2co0diz2d4v8syfmi9qzorplg6mw0lwnmz7udrdvoqy8z4x1mw4dedxx7td9pt0gjevui1demu9nizeiu5h243ch41ngikw5ozctpht553fxtic2m7htvghqksjyok7smcl0tmkr89imxfyxcoc87sqyt19hp4pn2ec3fufdaasj5owy6yoq80rcj9vpqn1xvfy5xt01eltv668khtl2az9mm6gu7m6tmur106eeu1pizepul7lbncibs4jsujhnr0p47655quw3kvi6zifnt18seaytwjzo4iqf758fk0qeazp5k2ex813td25kahddbvp4e5ukzu5oycqsx98fhhju1vssmkvbig03h9gkunyehewqbmiy7eskkn1yy0o96j3y06scblm394mmc71im8ajpi5kcz688mbpvt8h == \q\n\7\y\y\g\3\q\y\2\b\v\c\g\u\a\k\k\e\d\a\r\6\v\j\v\m\9\p\g\e\w\c\u\i\e\v\3\h\8\8\2\x\b\v\1\5\4\q\6\m\3\8\i\h\h\a\j\h\i\i\0\n\u\w\e\a\t\h\c\w\7\l\0\j\l\b\m\j\a\o\k\c\8\r\a\y\d\2\c\o\0\d\i\z\2\d\4\v\8\s\y\f\m\i\9\q\z\o\r\p\l\g\6\m\w\0\l\w\n\m\z\7\u\d\r\d\v\o\q\y\8\z\4\x\1\m\w\4\d\e\d\x\x\7\t\d\9\p\t\0\g\j\e\v\u\i\1\d\e\m\u\9\n\i\z\e\i\u\5\h\2\4\3\c\h\4\1\n\g\i\k\w\5\o\z\c\t\p\h\t\5\5\3\f\x\t\i\c\2\m\7\h\t\v\g\h\q\k\s\j\y\o\k\7\s\m\c\l\0\t\m\k\r\8\9\i\m\x\f\y\x\c\o\c\8\7\s\q\y\t\1\9\h\p\4\p\n\2\e\c\3\f\u\f\d\a\a\s\j\5\o\w\y\6\y\o\q\8\0\r\c\j\9\v\p\q\n\1\x\v\f\y\5\x\t\0\1\e\l\t\v\6\6\8\k\h\t\l\2\a\z\9\m\m\6\g\u\7\m\6\t\m\u\r\1\0\6\e\e\u\1\p\i\z\e\p\u\l\7\l\b\n\c\i\b\s\4\j\s\u\j\h\n\r\0\p\4\7\6\5\5\q\u\w\3\k\v\i\6\z\i\f\n\t\1\8\s\e\a\y\t\w\j\z\o\4\i\q\f\7\5\8\f\k\0\q\e\a\z\p\5\k\2\e\x\8\1\3\t\d\2\5\k\a\h\d\d\b\v\p\4\e\5\u\k\z\u\5\o\y\c\q\s\x\9\8\f\h\h\j\u\1\v\s\s\m\k\v\b\i\g\0\3\h\9\g\k\u\n\y\e\h\e\w\q\b\m\i\y\7\e\s\k\k\n\1\y\y\0\o\9\6\j\3\y\0\6\s\c\b\l\m\3\9\4\m\m\c\7\1\i\m\8\a\j\p\i\5\k\c\z\6\8\8\m\b\p\v\t\8\h ]] 00:28:43.701 ************************************ 00:28:43.701 00:28:43.701 real 0m7.681s 00:28:43.701 user 0m4.387s 00:28:43.701 sys 0m2.175s 00:28:43.701 05:11:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:43.701 05:11:13 -- common/autotest_common.sh@10 -- # set +x 00:28:43.701 END TEST dd_flags_misc 00:28:43.701 ************************************ 00:28:43.701 05:11:13 -- dd/posix.sh@131 -- # tests_forced_aio 00:28:43.701 05:11:13 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:28:43.701 * Second test run, using AIO 00:28:43.701 05:11:13 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:28:43.701 05:11:13 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:28:43.701 05:11:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:43.701 05:11:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:43.701 05:11:13 -- common/autotest_common.sh@10 -- # set +x 00:28:43.701 ************************************ 00:28:43.701 START TEST dd_flag_append_forced_aio 00:28:43.701 ************************************ 00:28:43.701 05:11:13 -- common/autotest_common.sh@1104 -- # append 00:28:43.701 05:11:13 -- dd/posix.sh@16 -- # local dump0 00:28:43.701 05:11:13 -- dd/posix.sh@17 -- # local dump1 00:28:43.701 05:11:13 -- dd/posix.sh@19 -- # gen_bytes 32 00:28:43.701 05:11:13 -- dd/common.sh@98 -- # xtrace_disable 
00:28:43.701 05:11:13 -- common/autotest_common.sh@10 -- # set +x 00:28:43.701 05:11:13 -- dd/posix.sh@19 -- # dump0=venfd8h2y43djgcfn96tpwtxmhruzpw7 00:28:43.701 05:11:13 -- dd/posix.sh@20 -- # gen_bytes 32 00:28:43.701 05:11:13 -- dd/common.sh@98 -- # xtrace_disable 00:28:43.701 05:11:13 -- common/autotest_common.sh@10 -- # set +x 00:28:43.701 05:11:13 -- dd/posix.sh@20 -- # dump1=dotgrn6y89zpaqr49clmm9ryvqlq1mdb 00:28:43.701 05:11:13 -- dd/posix.sh@22 -- # printf %s venfd8h2y43djgcfn96tpwtxmhruzpw7 00:28:43.701 05:11:13 -- dd/posix.sh@23 -- # printf %s dotgrn6y89zpaqr49clmm9ryvqlq1mdb 00:28:43.701 05:11:13 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:28:43.701 [2024-04-27 05:11:13.509167] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:43.701 [2024-04-27 05:11:13.509461] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146709 ] 00:28:43.960 [2024-04-27 05:11:13.680434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.960 [2024-04-27 05:11:13.804186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.784  Copying: 32/32 [B] (average 31 kBps) 00:28:44.784 00:28:44.785 05:11:14 -- dd/posix.sh@27 -- # [[ dotgrn6y89zpaqr49clmm9ryvqlq1mdbvenfd8h2y43djgcfn96tpwtxmhruzpw7 == \d\o\t\g\r\n\6\y\8\9\z\p\a\q\r\4\9\c\l\m\m\9\r\y\v\q\l\q\1\m\d\b\v\e\n\f\d\8\h\2\y\4\3\d\j\g\c\f\n\9\6\t\p\w\t\x\m\h\r\u\z\p\w\7 ]] 00:28:44.785 00:28:44.785 real 0m1.023s 00:28:44.785 user 0m0.582s 00:28:44.785 sys 0m0.311s 00:28:44.785 ************************************ 00:28:44.785 END TEST dd_flag_append_forced_aio 00:28:44.785 ************************************ 00:28:44.785 05:11:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:44.785 05:11:14 -- common/autotest_common.sh@10 -- # set +x 00:28:44.785 05:11:14 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:28:44.785 05:11:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:44.785 05:11:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:44.785 05:11:14 -- common/autotest_common.sh@10 -- # set +x 00:28:44.785 ************************************ 00:28:44.785 START TEST dd_flag_directory_forced_aio 00:28:44.785 ************************************ 00:28:44.785 05:11:14 -- common/autotest_common.sh@1104 -- # directory 00:28:44.785 05:11:14 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:44.785 05:11:14 -- common/autotest_common.sh@640 -- # local es=0 00:28:44.785 05:11:14 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:44.785 05:11:14 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:44.785 05:11:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:44.785 05:11:14 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:44.785 05:11:14 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:44.785 05:11:14 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:44.785 05:11:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:44.785 05:11:14 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:44.785 05:11:14 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:44.785 05:11:14 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:44.785 [2024-04-27 05:11:14.588971] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:44.785 [2024-04-27 05:11:14.589239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146744 ] 00:28:45.043 [2024-04-27 05:11:14.762163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.043 [2024-04-27 05:11:14.897085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.302 [2024-04-27 05:11:15.058236] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:45.302 [2024-04-27 05:11:15.058391] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:45.302 [2024-04-27 05:11:15.058456] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:45.561 [2024-04-27 05:11:15.320522] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:45.561 05:11:15 -- common/autotest_common.sh@643 -- # es=236 00:28:45.561 05:11:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:45.561 05:11:15 -- common/autotest_common.sh@652 -- # es=108 00:28:45.561 05:11:15 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:45.561 05:11:15 -- common/autotest_common.sh@660 -- # es=1 00:28:45.561 05:11:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:45.561 05:11:15 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:45.561 05:11:15 -- common/autotest_common.sh@640 -- # local es=0 00:28:45.821 05:11:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:45.821 05:11:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:45.821 05:11:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:45.821 05:11:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:45.821 05:11:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:45.821 05:11:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:45.821 05:11:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:45.821 05:11:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:28:45.821 05:11:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:45.821 05:11:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:45.821 [2024-04-27 05:11:15.532906] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:45.821 [2024-04-27 05:11:15.533179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146765 ] 00:28:45.821 [2024-04-27 05:11:15.688514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.081 [2024-04-27 05:11:15.815869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.081 [2024-04-27 05:11:15.966063] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:46.081 [2024-04-27 05:11:15.966166] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:46.081 [2024-04-27 05:11:15.966221] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:46.339 [2024-04-27 05:11:16.216110] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:46.599 05:11:16 -- common/autotest_common.sh@643 -- # es=236 00:28:46.599 05:11:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:46.599 05:11:16 -- common/autotest_common.sh@652 -- # es=108 00:28:46.599 05:11:16 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:46.599 05:11:16 -- common/autotest_common.sh@660 -- # es=1 00:28:46.599 05:11:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:46.599 00:28:46.599 real 0m1.858s 00:28:46.599 user 0m1.064s 00:28:46.599 sys 0m0.595s 00:28:46.599 05:11:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:46.599 05:11:16 -- common/autotest_common.sh@10 -- # set +x 00:28:46.599 ************************************ 00:28:46.599 END TEST dd_flag_directory_forced_aio 00:28:46.599 ************************************ 00:28:46.599 05:11:16 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:28:46.599 05:11:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:46.599 05:11:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:46.599 05:11:16 -- common/autotest_common.sh@10 -- # set +x 00:28:46.599 ************************************ 00:28:46.599 START TEST dd_flag_nofollow_forced_aio 00:28:46.599 ************************************ 00:28:46.599 05:11:16 -- common/autotest_common.sh@1104 -- # nofollow 00:28:46.599 05:11:16 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:46.599 05:11:16 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:46.599 05:11:16 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:46.599 05:11:16 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:46.599 05:11:16 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:46.599 05:11:16 -- common/autotest_common.sh@640 -- # local es=0 00:28:46.599 05:11:16 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:46.599 05:11:16 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:46.599 05:11:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:46.599 05:11:16 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:46.599 05:11:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:46.599 05:11:16 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:46.599 05:11:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:46.599 05:11:16 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:46.599 05:11:16 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:46.599 05:11:16 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:46.599 [2024-04-27 05:11:16.510992] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:46.599 [2024-04-27 05:11:16.511231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146796 ] 00:28:46.858 [2024-04-27 05:11:16.681461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.117 [2024-04-27 05:11:16.797273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.117 [2024-04-27 05:11:16.938441] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:47.117 [2024-04-27 05:11:16.938568] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:47.117 [2024-04-27 05:11:16.938654] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:47.383 [2024-04-27 05:11:17.169667] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:47.648 05:11:17 -- common/autotest_common.sh@643 -- # es=216 00:28:47.648 05:11:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:47.648 05:11:17 -- common/autotest_common.sh@652 -- # es=88 00:28:47.648 05:11:17 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:47.648 05:11:17 -- common/autotest_common.sh@660 -- # es=1 00:28:47.648 05:11:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:47.648 05:11:17 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:47.648 05:11:17 -- common/autotest_common.sh@640 -- # local es=0 00:28:47.648 05:11:17 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:47.648 05:11:17 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.648 05:11:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:47.648 05:11:17 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.648 05:11:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:47.649 05:11:17 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.649 05:11:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:47.649 05:11:17 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.649 05:11:17 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:47.649 05:11:17 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:47.649 [2024-04-27 05:11:17.403618] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:47.649 [2024-04-27 05:11:17.404496] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146816 ] 00:28:47.908 [2024-04-27 05:11:17.574815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.908 [2024-04-27 05:11:17.700533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.167 [2024-04-27 05:11:17.851566] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:48.167 [2024-04-27 05:11:17.851710] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:48.167 [2024-04-27 05:11:17.851771] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:48.425 [2024-04-27 05:11:18.098300] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:48.425 05:11:18 -- common/autotest_common.sh@643 -- # es=216 00:28:48.425 05:11:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:48.425 05:11:18 -- common/autotest_common.sh@652 -- # es=88 00:28:48.425 05:11:18 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:48.425 05:11:18 -- common/autotest_common.sh@660 -- # es=1 00:28:48.425 05:11:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:48.425 05:11:18 -- dd/posix.sh@46 -- # gen_bytes 512 00:28:48.425 05:11:18 -- dd/common.sh@98 -- # xtrace_disable 00:28:48.425 05:11:18 -- common/autotest_common.sh@10 -- # set +x 00:28:48.425 05:11:18 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:48.685 [2024-04-27 05:11:18.381440] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:48.685 [2024-04-27 05:11:18.381863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146827 ] 00:28:48.685 [2024-04-27 05:11:18.555933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.944 [2024-04-27 05:11:18.692301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.511  Copying: 512/512 [B] (average 500 kBps) 00:28:49.511 00:28:49.511 05:11:19 -- dd/posix.sh@49 -- # [[ vupa2fbqk5y6bt9r93t52z8mi2o361u1iopciej1jcea0hci68yyui84a6b6jg95z80s1vgv4tu2p8g8ql7b6oiy04g43oxm568ynou2mmjiyr9i60ppcvuvwbift8motaxr10yagqiwuiets5741tzaquxp7qvc0a2uix78ijssfh3ob615ah8cadgdab6ujhm5bncmasykil7fd3nzwg3b5td5ytfyxqfh9lqdhk0ptn3n45ybsoi8v6kfzyw9vmdowizddxmmw366xjqdj8c0qa2fvrteurb31ax6ebiu4kncnzy9306aazozg75fig9yeemdvfayk0nrhe4mr22m34i35dqbm1lxr6jgtf72q8rbo7tkr3bod34ilmwm4gl0z6qf2co1o78jw16108dqpy802v5wjmag94cn45ghk3xs8rifkbnxvj1b1a3etgf3g8dcac95b3op21y6u3trec749au71a89xn86cuaqp2zsby2nwbbv89ho60v9 == \v\u\p\a\2\f\b\q\k\5\y\6\b\t\9\r\9\3\t\5\2\z\8\m\i\2\o\3\6\1\u\1\i\o\p\c\i\e\j\1\j\c\e\a\0\h\c\i\6\8\y\y\u\i\8\4\a\6\b\6\j\g\9\5\z\8\0\s\1\v\g\v\4\t\u\2\p\8\g\8\q\l\7\b\6\o\i\y\0\4\g\4\3\o\x\m\5\6\8\y\n\o\u\2\m\m\j\i\y\r\9\i\6\0\p\p\c\v\u\v\w\b\i\f\t\8\m\o\t\a\x\r\1\0\y\a\g\q\i\w\u\i\e\t\s\5\7\4\1\t\z\a\q\u\x\p\7\q\v\c\0\a\2\u\i\x\7\8\i\j\s\s\f\h\3\o\b\6\1\5\a\h\8\c\a\d\g\d\a\b\6\u\j\h\m\5\b\n\c\m\a\s\y\k\i\l\7\f\d\3\n\z\w\g\3\b\5\t\d\5\y\t\f\y\x\q\f\h\9\l\q\d\h\k\0\p\t\n\3\n\4\5\y\b\s\o\i\8\v\6\k\f\z\y\w\9\v\m\d\o\w\i\z\d\d\x\m\m\w\3\6\6\x\j\q\d\j\8\c\0\q\a\2\f\v\r\t\e\u\r\b\3\1\a\x\6\e\b\i\u\4\k\n\c\n\z\y\9\3\0\6\a\a\z\o\z\g\7\5\f\i\g\9\y\e\e\m\d\v\f\a\y\k\0\n\r\h\e\4\m\r\2\2\m\3\4\i\3\5\d\q\b\m\1\l\x\r\6\j\g\t\f\7\2\q\8\r\b\o\7\t\k\r\3\b\o\d\3\4\i\l\m\w\m\4\g\l\0\z\6\q\f\2\c\o\1\o\7\8\j\w\1\6\1\0\8\d\q\p\y\8\0\2\v\5\w\j\m\a\g\9\4\c\n\4\5\g\h\k\3\x\s\8\r\i\f\k\b\n\x\v\j\1\b\1\a\3\e\t\g\f\3\g\8\d\c\a\c\9\5\b\3\o\p\2\1\y\6\u\3\t\r\e\c\7\4\9\a\u\7\1\a\8\9\x\n\8\6\c\u\a\q\p\2\z\s\b\y\2\n\w\b\b\v\8\9\h\o\6\0\v\9 ]] 00:28:49.511 00:28:49.511 real 0m2.905s 00:28:49.511 user 0m1.639s 00:28:49.511 sys 0m0.937s 00:28:49.511 05:11:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:49.511 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:28:49.511 ************************************ 00:28:49.511 END TEST dd_flag_nofollow_forced_aio 00:28:49.511 ************************************ 00:28:49.511 05:11:19 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:28:49.511 05:11:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:49.511 05:11:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:49.511 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:28:49.511 ************************************ 00:28:49.511 START TEST dd_flag_noatime_forced_aio 00:28:49.511 ************************************ 00:28:49.512 05:11:19 -- common/autotest_common.sh@1104 -- # noatime 00:28:49.512 05:11:19 -- dd/posix.sh@53 -- # local atime_if 00:28:49.512 05:11:19 -- dd/posix.sh@54 -- # local atime_of 00:28:49.512 05:11:19 -- dd/posix.sh@58 -- # gen_bytes 512 00:28:49.512 05:11:19 -- dd/common.sh@98 -- # xtrace_disable 00:28:49.512 05:11:19 -- common/autotest_common.sh@10 -- # set +x 00:28:49.512 05:11:19 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:49.512 05:11:19 -- dd/posix.sh@60 -- # atime_if=1714194678 
00:28:49.512 05:11:19 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:49.512 05:11:19 -- dd/posix.sh@61 -- # atime_of=1714194679 00:28:49.512 05:11:19 -- dd/posix.sh@66 -- # sleep 1 00:28:50.890 05:11:20 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:50.891 [2024-04-27 05:11:20.481829] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:50.891 [2024-04-27 05:11:20.482122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146886 ] 00:28:50.891 [2024-04-27 05:11:20.656786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.891 [2024-04-27 05:11:20.784437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.717  Copying: 512/512 [B] (average 500 kBps) 00:28:51.717 00:28:51.717 05:11:21 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:51.717 05:11:21 -- dd/posix.sh@69 -- # (( atime_if == 1714194678 )) 00:28:51.717 05:11:21 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:51.717 05:11:21 -- dd/posix.sh@70 -- # (( atime_of == 1714194679 )) 00:28:51.717 05:11:21 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:51.717 [2024-04-27 05:11:21.482631] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:51.717 [2024-04-27 05:11:21.482925] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146905 ] 00:28:51.977 [2024-04-27 05:11:21.654468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.977 [2024-04-27 05:11:21.755560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.521  Copying: 512/512 [B] (average 500 kBps) 00:28:52.521 00:28:52.521 05:11:22 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:52.521 05:11:22 -- dd/posix.sh@73 -- # (( atime_if < 1714194681 )) 00:28:52.521 00:28:52.521 real 0m2.983s 00:28:52.521 user 0m1.109s 00:28:52.521 sys 0m0.609s 00:28:52.521 05:11:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:52.521 05:11:22 -- common/autotest_common.sh@10 -- # set +x 00:28:52.521 ************************************ 00:28:52.521 END TEST dd_flag_noatime_forced_aio 00:28:52.521 ************************************ 00:28:52.521 05:11:22 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:28:52.521 05:11:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:52.521 05:11:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:52.521 05:11:22 -- common/autotest_common.sh@10 -- # set +x 00:28:52.781 ************************************ 00:28:52.781 START TEST dd_flags_misc_forced_aio 00:28:52.781 ************************************ 00:28:52.781 05:11:22 -- common/autotest_common.sh@1104 -- # io 00:28:52.781 05:11:22 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:28:52.781 05:11:22 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:28:52.781 05:11:22 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:28:52.781 05:11:22 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:52.781 05:11:22 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:52.781 05:11:22 -- dd/common.sh@98 -- # xtrace_disable 00:28:52.781 05:11:22 -- common/autotest_common.sh@10 -- # set +x 00:28:52.781 05:11:22 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:52.781 05:11:22 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:52.781 [2024-04-27 05:11:22.510497] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:52.781 [2024-04-27 05:11:22.510778] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146942 ] 00:28:52.781 [2024-04-27 05:11:22.681551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.039 [2024-04-27 05:11:22.790206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.603  Copying: 512/512 [B] (average 500 kBps) 00:28:53.603 00:28:53.604 05:11:23 -- dd/posix.sh@93 -- # [[ sn08mu7y0cjk5s7w5q85w934j1ruion5106e4i1mu30lyjb8n6twzod93ymqk2q8joru1w4gfybazh1q4shve5lb96kg6igikyojkfpnfrakyjbw6hhvlmyj4agxbsv8w9gbpfrt7ht0vfjdo4kxnnkbj2dd1xwtvrbeg5nibzao6teuxpzbkkpk7ltvtqct7wwx2udm95n82m7kx0k1gicq53044ql89t8q3dhgsi1e98ketwhrcd4la2pn05gt9rk9a54fiik298n4rn5g7re16ns39j8ri6mal2ltqfou63y5agbxkfm6nq983487hgkxej8cb8z3ujvdrf2pvxpcvs4s61maqdfey6ay4rhnjqgwucbx4sle89wuvqv183kwvan6awtgw9y3lpkfs74u55c7ozwgxcl6f2bjhqxf7cf1pl504q63ni5lopchafjvpdeauc7xc62954gng4uws3neogdzi2upbac432cwpgzz9399aj5oirt6q2r8 == \s\n\0\8\m\u\7\y\0\c\j\k\5\s\7\w\5\q\8\5\w\9\3\4\j\1\r\u\i\o\n\5\1\0\6\e\4\i\1\m\u\3\0\l\y\j\b\8\n\6\t\w\z\o\d\9\3\y\m\q\k\2\q\8\j\o\r\u\1\w\4\g\f\y\b\a\z\h\1\q\4\s\h\v\e\5\l\b\9\6\k\g\6\i\g\i\k\y\o\j\k\f\p\n\f\r\a\k\y\j\b\w\6\h\h\v\l\m\y\j\4\a\g\x\b\s\v\8\w\9\g\b\p\f\r\t\7\h\t\0\v\f\j\d\o\4\k\x\n\n\k\b\j\2\d\d\1\x\w\t\v\r\b\e\g\5\n\i\b\z\a\o\6\t\e\u\x\p\z\b\k\k\p\k\7\l\t\v\t\q\c\t\7\w\w\x\2\u\d\m\9\5\n\8\2\m\7\k\x\0\k\1\g\i\c\q\5\3\0\4\4\q\l\8\9\t\8\q\3\d\h\g\s\i\1\e\9\8\k\e\t\w\h\r\c\d\4\l\a\2\p\n\0\5\g\t\9\r\k\9\a\5\4\f\i\i\k\2\9\8\n\4\r\n\5\g\7\r\e\1\6\n\s\3\9\j\8\r\i\6\m\a\l\2\l\t\q\f\o\u\6\3\y\5\a\g\b\x\k\f\m\6\n\q\9\8\3\4\8\7\h\g\k\x\e\j\8\c\b\8\z\3\u\j\v\d\r\f\2\p\v\x\p\c\v\s\4\s\6\1\m\a\q\d\f\e\y\6\a\y\4\r\h\n\j\q\g\w\u\c\b\x\4\s\l\e\8\9\w\u\v\q\v\1\8\3\k\w\v\a\n\6\a\w\t\g\w\9\y\3\l\p\k\f\s\7\4\u\5\5\c\7\o\z\w\g\x\c\l\6\f\2\b\j\h\q\x\f\7\c\f\1\p\l\5\0\4\q\6\3\n\i\5\l\o\p\c\h\a\f\j\v\p\d\e\a\u\c\7\x\c\6\2\9\5\4\g\n\g\4\u\w\s\3\n\e\o\g\d\z\i\2\u\p\b\a\c\4\3\2\c\w\p\g\z\z\9\3\9\9\a\j\5\o\i\r\t\6\q\2\r\8 ]] 00:28:53.604 05:11:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:53.604 05:11:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:53.604 [2024-04-27 05:11:23.519029] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:53.604 [2024-04-27 05:11:23.519985] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146957 ] 00:28:53.861 [2024-04-27 05:11:23.690295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.120 [2024-04-27 05:11:23.819636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.687  Copying: 512/512 [B] (average 500 kBps) 00:28:54.687 00:28:54.687 05:11:24 -- dd/posix.sh@93 -- # [[ sn08mu7y0cjk5s7w5q85w934j1ruion5106e4i1mu30lyjb8n6twzod93ymqk2q8joru1w4gfybazh1q4shve5lb96kg6igikyojkfpnfrakyjbw6hhvlmyj4agxbsv8w9gbpfrt7ht0vfjdo4kxnnkbj2dd1xwtvrbeg5nibzao6teuxpzbkkpk7ltvtqct7wwx2udm95n82m7kx0k1gicq53044ql89t8q3dhgsi1e98ketwhrcd4la2pn05gt9rk9a54fiik298n4rn5g7re16ns39j8ri6mal2ltqfou63y5agbxkfm6nq983487hgkxej8cb8z3ujvdrf2pvxpcvs4s61maqdfey6ay4rhnjqgwucbx4sle89wuvqv183kwvan6awtgw9y3lpkfs74u55c7ozwgxcl6f2bjhqxf7cf1pl504q63ni5lopchafjvpdeauc7xc62954gng4uws3neogdzi2upbac432cwpgzz9399aj5oirt6q2r8 == \s\n\0\8\m\u\7\y\0\c\j\k\5\s\7\w\5\q\8\5\w\9\3\4\j\1\r\u\i\o\n\5\1\0\6\e\4\i\1\m\u\3\0\l\y\j\b\8\n\6\t\w\z\o\d\9\3\y\m\q\k\2\q\8\j\o\r\u\1\w\4\g\f\y\b\a\z\h\1\q\4\s\h\v\e\5\l\b\9\6\k\g\6\i\g\i\k\y\o\j\k\f\p\n\f\r\a\k\y\j\b\w\6\h\h\v\l\m\y\j\4\a\g\x\b\s\v\8\w\9\g\b\p\f\r\t\7\h\t\0\v\f\j\d\o\4\k\x\n\n\k\b\j\2\d\d\1\x\w\t\v\r\b\e\g\5\n\i\b\z\a\o\6\t\e\u\x\p\z\b\k\k\p\k\7\l\t\v\t\q\c\t\7\w\w\x\2\u\d\m\9\5\n\8\2\m\7\k\x\0\k\1\g\i\c\q\5\3\0\4\4\q\l\8\9\t\8\q\3\d\h\g\s\i\1\e\9\8\k\e\t\w\h\r\c\d\4\l\a\2\p\n\0\5\g\t\9\r\k\9\a\5\4\f\i\i\k\2\9\8\n\4\r\n\5\g\7\r\e\1\6\n\s\3\9\j\8\r\i\6\m\a\l\2\l\t\q\f\o\u\6\3\y\5\a\g\b\x\k\f\m\6\n\q\9\8\3\4\8\7\h\g\k\x\e\j\8\c\b\8\z\3\u\j\v\d\r\f\2\p\v\x\p\c\v\s\4\s\6\1\m\a\q\d\f\e\y\6\a\y\4\r\h\n\j\q\g\w\u\c\b\x\4\s\l\e\8\9\w\u\v\q\v\1\8\3\k\w\v\a\n\6\a\w\t\g\w\9\y\3\l\p\k\f\s\7\4\u\5\5\c\7\o\z\w\g\x\c\l\6\f\2\b\j\h\q\x\f\7\c\f\1\p\l\5\0\4\q\6\3\n\i\5\l\o\p\c\h\a\f\j\v\p\d\e\a\u\c\7\x\c\6\2\9\5\4\g\n\g\4\u\w\s\3\n\e\o\g\d\z\i\2\u\p\b\a\c\4\3\2\c\w\p\g\z\z\9\3\9\9\a\j\5\o\i\r\t\6\q\2\r\8 ]] 00:28:54.687 05:11:24 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:54.687 05:11:24 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:54.687 [2024-04-27 05:11:24.525445] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:54.687 [2024-04-27 05:11:24.525741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146974 ] 00:28:54.946 [2024-04-27 05:11:24.697304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.946 [2024-04-27 05:11:24.795150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.770  Copying: 512/512 [B] (average 166 kBps) 00:28:55.770 00:28:55.770 05:11:25 -- dd/posix.sh@93 -- # [[ sn08mu7y0cjk5s7w5q85w934j1ruion5106e4i1mu30lyjb8n6twzod93ymqk2q8joru1w4gfybazh1q4shve5lb96kg6igikyojkfpnfrakyjbw6hhvlmyj4agxbsv8w9gbpfrt7ht0vfjdo4kxnnkbj2dd1xwtvrbeg5nibzao6teuxpzbkkpk7ltvtqct7wwx2udm95n82m7kx0k1gicq53044ql89t8q3dhgsi1e98ketwhrcd4la2pn05gt9rk9a54fiik298n4rn5g7re16ns39j8ri6mal2ltqfou63y5agbxkfm6nq983487hgkxej8cb8z3ujvdrf2pvxpcvs4s61maqdfey6ay4rhnjqgwucbx4sle89wuvqv183kwvan6awtgw9y3lpkfs74u55c7ozwgxcl6f2bjhqxf7cf1pl504q63ni5lopchafjvpdeauc7xc62954gng4uws3neogdzi2upbac432cwpgzz9399aj5oirt6q2r8 == \s\n\0\8\m\u\7\y\0\c\j\k\5\s\7\w\5\q\8\5\w\9\3\4\j\1\r\u\i\o\n\5\1\0\6\e\4\i\1\m\u\3\0\l\y\j\b\8\n\6\t\w\z\o\d\9\3\y\m\q\k\2\q\8\j\o\r\u\1\w\4\g\f\y\b\a\z\h\1\q\4\s\h\v\e\5\l\b\9\6\k\g\6\i\g\i\k\y\o\j\k\f\p\n\f\r\a\k\y\j\b\w\6\h\h\v\l\m\y\j\4\a\g\x\b\s\v\8\w\9\g\b\p\f\r\t\7\h\t\0\v\f\j\d\o\4\k\x\n\n\k\b\j\2\d\d\1\x\w\t\v\r\b\e\g\5\n\i\b\z\a\o\6\t\e\u\x\p\z\b\k\k\p\k\7\l\t\v\t\q\c\t\7\w\w\x\2\u\d\m\9\5\n\8\2\m\7\k\x\0\k\1\g\i\c\q\5\3\0\4\4\q\l\8\9\t\8\q\3\d\h\g\s\i\1\e\9\8\k\e\t\w\h\r\c\d\4\l\a\2\p\n\0\5\g\t\9\r\k\9\a\5\4\f\i\i\k\2\9\8\n\4\r\n\5\g\7\r\e\1\6\n\s\3\9\j\8\r\i\6\m\a\l\2\l\t\q\f\o\u\6\3\y\5\a\g\b\x\k\f\m\6\n\q\9\8\3\4\8\7\h\g\k\x\e\j\8\c\b\8\z\3\u\j\v\d\r\f\2\p\v\x\p\c\v\s\4\s\6\1\m\a\q\d\f\e\y\6\a\y\4\r\h\n\j\q\g\w\u\c\b\x\4\s\l\e\8\9\w\u\v\q\v\1\8\3\k\w\v\a\n\6\a\w\t\g\w\9\y\3\l\p\k\f\s\7\4\u\5\5\c\7\o\z\w\g\x\c\l\6\f\2\b\j\h\q\x\f\7\c\f\1\p\l\5\0\4\q\6\3\n\i\5\l\o\p\c\h\a\f\j\v\p\d\e\a\u\c\7\x\c\6\2\9\5\4\g\n\g\4\u\w\s\3\n\e\o\g\d\z\i\2\u\p\b\a\c\4\3\2\c\w\p\g\z\z\9\3\9\9\a\j\5\o\i\r\t\6\q\2\r\8 ]] 00:28:55.770 05:11:25 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:55.770 05:11:25 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:55.770 [2024-04-27 05:11:25.487130] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:55.770 [2024-04-27 05:11:25.487443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146991 ] 00:28:55.770 [2024-04-27 05:11:25.658451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.028 [2024-04-27 05:11:25.795005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.595  Copying: 512/512 [B] (average 125 kBps) 00:28:56.595 00:28:56.596 05:11:26 -- dd/posix.sh@93 -- # [[ sn08mu7y0cjk5s7w5q85w934j1ruion5106e4i1mu30lyjb8n6twzod93ymqk2q8joru1w4gfybazh1q4shve5lb96kg6igikyojkfpnfrakyjbw6hhvlmyj4agxbsv8w9gbpfrt7ht0vfjdo4kxnnkbj2dd1xwtvrbeg5nibzao6teuxpzbkkpk7ltvtqct7wwx2udm95n82m7kx0k1gicq53044ql89t8q3dhgsi1e98ketwhrcd4la2pn05gt9rk9a54fiik298n4rn5g7re16ns39j8ri6mal2ltqfou63y5agbxkfm6nq983487hgkxej8cb8z3ujvdrf2pvxpcvs4s61maqdfey6ay4rhnjqgwucbx4sle89wuvqv183kwvan6awtgw9y3lpkfs74u55c7ozwgxcl6f2bjhqxf7cf1pl504q63ni5lopchafjvpdeauc7xc62954gng4uws3neogdzi2upbac432cwpgzz9399aj5oirt6q2r8 == \s\n\0\8\m\u\7\y\0\c\j\k\5\s\7\w\5\q\8\5\w\9\3\4\j\1\r\u\i\o\n\5\1\0\6\e\4\i\1\m\u\3\0\l\y\j\b\8\n\6\t\w\z\o\d\9\3\y\m\q\k\2\q\8\j\o\r\u\1\w\4\g\f\y\b\a\z\h\1\q\4\s\h\v\e\5\l\b\9\6\k\g\6\i\g\i\k\y\o\j\k\f\p\n\f\r\a\k\y\j\b\w\6\h\h\v\l\m\y\j\4\a\g\x\b\s\v\8\w\9\g\b\p\f\r\t\7\h\t\0\v\f\j\d\o\4\k\x\n\n\k\b\j\2\d\d\1\x\w\t\v\r\b\e\g\5\n\i\b\z\a\o\6\t\e\u\x\p\z\b\k\k\p\k\7\l\t\v\t\q\c\t\7\w\w\x\2\u\d\m\9\5\n\8\2\m\7\k\x\0\k\1\g\i\c\q\5\3\0\4\4\q\l\8\9\t\8\q\3\d\h\g\s\i\1\e\9\8\k\e\t\w\h\r\c\d\4\l\a\2\p\n\0\5\g\t\9\r\k\9\a\5\4\f\i\i\k\2\9\8\n\4\r\n\5\g\7\r\e\1\6\n\s\3\9\j\8\r\i\6\m\a\l\2\l\t\q\f\o\u\6\3\y\5\a\g\b\x\k\f\m\6\n\q\9\8\3\4\8\7\h\g\k\x\e\j\8\c\b\8\z\3\u\j\v\d\r\f\2\p\v\x\p\c\v\s\4\s\6\1\m\a\q\d\f\e\y\6\a\y\4\r\h\n\j\q\g\w\u\c\b\x\4\s\l\e\8\9\w\u\v\q\v\1\8\3\k\w\v\a\n\6\a\w\t\g\w\9\y\3\l\p\k\f\s\7\4\u\5\5\c\7\o\z\w\g\x\c\l\6\f\2\b\j\h\q\x\f\7\c\f\1\p\l\5\0\4\q\6\3\n\i\5\l\o\p\c\h\a\f\j\v\p\d\e\a\u\c\7\x\c\6\2\9\5\4\g\n\g\4\u\w\s\3\n\e\o\g\d\z\i\2\u\p\b\a\c\4\3\2\c\w\p\g\z\z\9\3\9\9\a\j\5\o\i\r\t\6\q\2\r\8 ]] 00:28:56.596 05:11:26 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:56.596 05:11:26 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:56.596 05:11:26 -- dd/common.sh@98 -- # xtrace_disable 00:28:56.596 05:11:26 -- common/autotest_common.sh@10 -- # set +x 00:28:56.596 05:11:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:56.596 05:11:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:56.596 [2024-04-27 05:11:26.412286] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:56.596 [2024-04-27 05:11:26.412615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147007 ] 00:28:56.855 [2024-04-27 05:11:26.582462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.855 [2024-04-27 05:11:26.704723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.372  Copying: 512/512 [B] (average 500 kBps) 00:28:57.372 00:28:57.643 05:11:27 -- dd/posix.sh@93 -- # [[ plqyj6wkle5iwmvobdk48oqbv0w50kkcr0z0dg3mbgtwyj0ig841oqc7nxqlub7xh6s4v46pgfb1ad3zgtqlbva46zm8mgvagetu1jurhji1cla2cu0kxoeu4nsio97eq0ha0zkac832k9ddhd2cqwzumr8z89x60xelgzzq0s6d4l1t6zcve5egdtbhplsc082domfqdwmpu2nn42zrpxqa1u9q32ilrqnm6xe0nyyih0q3zm4l41h3tfwtk6tizhd7nv3ciw7bdg51es0cn0e8fr5tln6hohm2vhylu5giba6kx40bc9g1idpv43jiwbsdxq392lj6vhcnmt0i08f1wit0ncl2hh7bi3prhx4foitx55o43d4w6fzylm7gy5e0xxtb94jn2q2t85di40247d3g4dxyt3rq7owpzl0hm2cdmrordv6yw8vnahln7aaihnmsgf8wlfouu2bceqtowsfybrtdh68gr8t9ldawdej98qamouy9q98wwmr1 == \p\l\q\y\j\6\w\k\l\e\5\i\w\m\v\o\b\d\k\4\8\o\q\b\v\0\w\5\0\k\k\c\r\0\z\0\d\g\3\m\b\g\t\w\y\j\0\i\g\8\4\1\o\q\c\7\n\x\q\l\u\b\7\x\h\6\s\4\v\4\6\p\g\f\b\1\a\d\3\z\g\t\q\l\b\v\a\4\6\z\m\8\m\g\v\a\g\e\t\u\1\j\u\r\h\j\i\1\c\l\a\2\c\u\0\k\x\o\e\u\4\n\s\i\o\9\7\e\q\0\h\a\0\z\k\a\c\8\3\2\k\9\d\d\h\d\2\c\q\w\z\u\m\r\8\z\8\9\x\6\0\x\e\l\g\z\z\q\0\s\6\d\4\l\1\t\6\z\c\v\e\5\e\g\d\t\b\h\p\l\s\c\0\8\2\d\o\m\f\q\d\w\m\p\u\2\n\n\4\2\z\r\p\x\q\a\1\u\9\q\3\2\i\l\r\q\n\m\6\x\e\0\n\y\y\i\h\0\q\3\z\m\4\l\4\1\h\3\t\f\w\t\k\6\t\i\z\h\d\7\n\v\3\c\i\w\7\b\d\g\5\1\e\s\0\c\n\0\e\8\f\r\5\t\l\n\6\h\o\h\m\2\v\h\y\l\u\5\g\i\b\a\6\k\x\4\0\b\c\9\g\1\i\d\p\v\4\3\j\i\w\b\s\d\x\q\3\9\2\l\j\6\v\h\c\n\m\t\0\i\0\8\f\1\w\i\t\0\n\c\l\2\h\h\7\b\i\3\p\r\h\x\4\f\o\i\t\x\5\5\o\4\3\d\4\w\6\f\z\y\l\m\7\g\y\5\e\0\x\x\t\b\9\4\j\n\2\q\2\t\8\5\d\i\4\0\2\4\7\d\3\g\4\d\x\y\t\3\r\q\7\o\w\p\z\l\0\h\m\2\c\d\m\r\o\r\d\v\6\y\w\8\v\n\a\h\l\n\7\a\a\i\h\n\m\s\g\f\8\w\l\f\o\u\u\2\b\c\e\q\t\o\w\s\f\y\b\r\t\d\h\6\8\g\r\8\t\9\l\d\a\w\d\e\j\9\8\q\a\m\o\u\y\9\q\9\8\w\w\m\r\1 ]] 00:28:57.643 05:11:27 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:57.643 05:11:27 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:57.643 [2024-04-27 05:11:27.366024] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:57.643 [2024-04-27 05:11:27.366889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147020 ] 00:28:57.643 [2024-04-27 05:11:27.536040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.926 [2024-04-27 05:11:27.652310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.494  Copying: 512/512 [B] (average 500 kBps) 00:28:58.494 00:28:58.494 05:11:28 -- dd/posix.sh@93 -- # [[ plqyj6wkle5iwmvobdk48oqbv0w50kkcr0z0dg3mbgtwyj0ig841oqc7nxqlub7xh6s4v46pgfb1ad3zgtqlbva46zm8mgvagetu1jurhji1cla2cu0kxoeu4nsio97eq0ha0zkac832k9ddhd2cqwzumr8z89x60xelgzzq0s6d4l1t6zcve5egdtbhplsc082domfqdwmpu2nn42zrpxqa1u9q32ilrqnm6xe0nyyih0q3zm4l41h3tfwtk6tizhd7nv3ciw7bdg51es0cn0e8fr5tln6hohm2vhylu5giba6kx40bc9g1idpv43jiwbsdxq392lj6vhcnmt0i08f1wit0ncl2hh7bi3prhx4foitx55o43d4w6fzylm7gy5e0xxtb94jn2q2t85di40247d3g4dxyt3rq7owpzl0hm2cdmrordv6yw8vnahln7aaihnmsgf8wlfouu2bceqtowsfybrtdh68gr8t9ldawdej98qamouy9q98wwmr1 == \p\l\q\y\j\6\w\k\l\e\5\i\w\m\v\o\b\d\k\4\8\o\q\b\v\0\w\5\0\k\k\c\r\0\z\0\d\g\3\m\b\g\t\w\y\j\0\i\g\8\4\1\o\q\c\7\n\x\q\l\u\b\7\x\h\6\s\4\v\4\6\p\g\f\b\1\a\d\3\z\g\t\q\l\b\v\a\4\6\z\m\8\m\g\v\a\g\e\t\u\1\j\u\r\h\j\i\1\c\l\a\2\c\u\0\k\x\o\e\u\4\n\s\i\o\9\7\e\q\0\h\a\0\z\k\a\c\8\3\2\k\9\d\d\h\d\2\c\q\w\z\u\m\r\8\z\8\9\x\6\0\x\e\l\g\z\z\q\0\s\6\d\4\l\1\t\6\z\c\v\e\5\e\g\d\t\b\h\p\l\s\c\0\8\2\d\o\m\f\q\d\w\m\p\u\2\n\n\4\2\z\r\p\x\q\a\1\u\9\q\3\2\i\l\r\q\n\m\6\x\e\0\n\y\y\i\h\0\q\3\z\m\4\l\4\1\h\3\t\f\w\t\k\6\t\i\z\h\d\7\n\v\3\c\i\w\7\b\d\g\5\1\e\s\0\c\n\0\e\8\f\r\5\t\l\n\6\h\o\h\m\2\v\h\y\l\u\5\g\i\b\a\6\k\x\4\0\b\c\9\g\1\i\d\p\v\4\3\j\i\w\b\s\d\x\q\3\9\2\l\j\6\v\h\c\n\m\t\0\i\0\8\f\1\w\i\t\0\n\c\l\2\h\h\7\b\i\3\p\r\h\x\4\f\o\i\t\x\5\5\o\4\3\d\4\w\6\f\z\y\l\m\7\g\y\5\e\0\x\x\t\b\9\4\j\n\2\q\2\t\8\5\d\i\4\0\2\4\7\d\3\g\4\d\x\y\t\3\r\q\7\o\w\p\z\l\0\h\m\2\c\d\m\r\o\r\d\v\6\y\w\8\v\n\a\h\l\n\7\a\a\i\h\n\m\s\g\f\8\w\l\f\o\u\u\2\b\c\e\q\t\o\w\s\f\y\b\r\t\d\h\6\8\g\r\8\t\9\l\d\a\w\d\e\j\9\8\q\a\m\o\u\y\9\q\9\8\w\w\m\r\1 ]] 00:28:58.494 05:11:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:58.494 05:11:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:58.494 [2024-04-27 05:11:28.327211] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:58.494 [2024-04-27 05:11:28.328184] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147037 ] 00:28:58.753 [2024-04-27 05:11:28.498452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.753 [2024-04-27 05:11:28.594746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.579  Copying: 512/512 [B] (average 166 kBps) 00:28:59.579 00:28:59.579 05:11:29 -- dd/posix.sh@93 -- # [[ plqyj6wkle5iwmvobdk48oqbv0w50kkcr0z0dg3mbgtwyj0ig841oqc7nxqlub7xh6s4v46pgfb1ad3zgtqlbva46zm8mgvagetu1jurhji1cla2cu0kxoeu4nsio97eq0ha0zkac832k9ddhd2cqwzumr8z89x60xelgzzq0s6d4l1t6zcve5egdtbhplsc082domfqdwmpu2nn42zrpxqa1u9q32ilrqnm6xe0nyyih0q3zm4l41h3tfwtk6tizhd7nv3ciw7bdg51es0cn0e8fr5tln6hohm2vhylu5giba6kx40bc9g1idpv43jiwbsdxq392lj6vhcnmt0i08f1wit0ncl2hh7bi3prhx4foitx55o43d4w6fzylm7gy5e0xxtb94jn2q2t85di40247d3g4dxyt3rq7owpzl0hm2cdmrordv6yw8vnahln7aaihnmsgf8wlfouu2bceqtowsfybrtdh68gr8t9ldawdej98qamouy9q98wwmr1 == \p\l\q\y\j\6\w\k\l\e\5\i\w\m\v\o\b\d\k\4\8\o\q\b\v\0\w\5\0\k\k\c\r\0\z\0\d\g\3\m\b\g\t\w\y\j\0\i\g\8\4\1\o\q\c\7\n\x\q\l\u\b\7\x\h\6\s\4\v\4\6\p\g\f\b\1\a\d\3\z\g\t\q\l\b\v\a\4\6\z\m\8\m\g\v\a\g\e\t\u\1\j\u\r\h\j\i\1\c\l\a\2\c\u\0\k\x\o\e\u\4\n\s\i\o\9\7\e\q\0\h\a\0\z\k\a\c\8\3\2\k\9\d\d\h\d\2\c\q\w\z\u\m\r\8\z\8\9\x\6\0\x\e\l\g\z\z\q\0\s\6\d\4\l\1\t\6\z\c\v\e\5\e\g\d\t\b\h\p\l\s\c\0\8\2\d\o\m\f\q\d\w\m\p\u\2\n\n\4\2\z\r\p\x\q\a\1\u\9\q\3\2\i\l\r\q\n\m\6\x\e\0\n\y\y\i\h\0\q\3\z\m\4\l\4\1\h\3\t\f\w\t\k\6\t\i\z\h\d\7\n\v\3\c\i\w\7\b\d\g\5\1\e\s\0\c\n\0\e\8\f\r\5\t\l\n\6\h\o\h\m\2\v\h\y\l\u\5\g\i\b\a\6\k\x\4\0\b\c\9\g\1\i\d\p\v\4\3\j\i\w\b\s\d\x\q\3\9\2\l\j\6\v\h\c\n\m\t\0\i\0\8\f\1\w\i\t\0\n\c\l\2\h\h\7\b\i\3\p\r\h\x\4\f\o\i\t\x\5\5\o\4\3\d\4\w\6\f\z\y\l\m\7\g\y\5\e\0\x\x\t\b\9\4\j\n\2\q\2\t\8\5\d\i\4\0\2\4\7\d\3\g\4\d\x\y\t\3\r\q\7\o\w\p\z\l\0\h\m\2\c\d\m\r\o\r\d\v\6\y\w\8\v\n\a\h\l\n\7\a\a\i\h\n\m\s\g\f\8\w\l\f\o\u\u\2\b\c\e\q\t\o\w\s\f\y\b\r\t\d\h\6\8\g\r\8\t\9\l\d\a\w\d\e\j\9\8\q\a\m\o\u\y\9\q\9\8\w\w\m\r\1 ]] 00:28:59.579 05:11:29 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:59.579 05:11:29 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:59.579 [2024-04-27 05:11:29.267021] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:59.579 [2024-04-27 05:11:29.267257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147056 ] 00:28:59.579 [2024-04-27 05:11:29.424187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.838 [2024-04-27 05:11:29.530291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.406  Copying: 512/512 [B] (average 125 kBps) 00:29:00.406 00:29:00.406 05:11:30 -- dd/posix.sh@93 -- # [[ plqyj6wkle5iwmvobdk48oqbv0w50kkcr0z0dg3mbgtwyj0ig841oqc7nxqlub7xh6s4v46pgfb1ad3zgtqlbva46zm8mgvagetu1jurhji1cla2cu0kxoeu4nsio97eq0ha0zkac832k9ddhd2cqwzumr8z89x60xelgzzq0s6d4l1t6zcve5egdtbhplsc082domfqdwmpu2nn42zrpxqa1u9q32ilrqnm6xe0nyyih0q3zm4l41h3tfwtk6tizhd7nv3ciw7bdg51es0cn0e8fr5tln6hohm2vhylu5giba6kx40bc9g1idpv43jiwbsdxq392lj6vhcnmt0i08f1wit0ncl2hh7bi3prhx4foitx55o43d4w6fzylm7gy5e0xxtb94jn2q2t85di40247d3g4dxyt3rq7owpzl0hm2cdmrordv6yw8vnahln7aaihnmsgf8wlfouu2bceqtowsfybrtdh68gr8t9ldawdej98qamouy9q98wwmr1 == \p\l\q\y\j\6\w\k\l\e\5\i\w\m\v\o\b\d\k\4\8\o\q\b\v\0\w\5\0\k\k\c\r\0\z\0\d\g\3\m\b\g\t\w\y\j\0\i\g\8\4\1\o\q\c\7\n\x\q\l\u\b\7\x\h\6\s\4\v\4\6\p\g\f\b\1\a\d\3\z\g\t\q\l\b\v\a\4\6\z\m\8\m\g\v\a\g\e\t\u\1\j\u\r\h\j\i\1\c\l\a\2\c\u\0\k\x\o\e\u\4\n\s\i\o\9\7\e\q\0\h\a\0\z\k\a\c\8\3\2\k\9\d\d\h\d\2\c\q\w\z\u\m\r\8\z\8\9\x\6\0\x\e\l\g\z\z\q\0\s\6\d\4\l\1\t\6\z\c\v\e\5\e\g\d\t\b\h\p\l\s\c\0\8\2\d\o\m\f\q\d\w\m\p\u\2\n\n\4\2\z\r\p\x\q\a\1\u\9\q\3\2\i\l\r\q\n\m\6\x\e\0\n\y\y\i\h\0\q\3\z\m\4\l\4\1\h\3\t\f\w\t\k\6\t\i\z\h\d\7\n\v\3\c\i\w\7\b\d\g\5\1\e\s\0\c\n\0\e\8\f\r\5\t\l\n\6\h\o\h\m\2\v\h\y\l\u\5\g\i\b\a\6\k\x\4\0\b\c\9\g\1\i\d\p\v\4\3\j\i\w\b\s\d\x\q\3\9\2\l\j\6\v\h\c\n\m\t\0\i\0\8\f\1\w\i\t\0\n\c\l\2\h\h\7\b\i\3\p\r\h\x\4\f\o\i\t\x\5\5\o\4\3\d\4\w\6\f\z\y\l\m\7\g\y\5\e\0\x\x\t\b\9\4\j\n\2\q\2\t\8\5\d\i\4\0\2\4\7\d\3\g\4\d\x\y\t\3\r\q\7\o\w\p\z\l\0\h\m\2\c\d\m\r\o\r\d\v\6\y\w\8\v\n\a\h\l\n\7\a\a\i\h\n\m\s\g\f\8\w\l\f\o\u\u\2\b\c\e\q\t\o\w\s\f\y\b\r\t\d\h\6\8\g\r\8\t\9\l\d\a\w\d\e\j\9\8\q\a\m\o\u\y\9\q\9\8\w\w\m\r\1 ]] 00:29:00.406 00:29:00.406 real 0m7.720s 00:29:00.406 user 0m4.329s 00:29:00.406 sys 0m2.264s 00:29:00.406 05:11:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:00.406 05:11:30 -- common/autotest_common.sh@10 -- # set +x 00:29:00.406 ************************************ 00:29:00.406 END TEST dd_flags_misc_forced_aio 00:29:00.406 ************************************ 00:29:00.406 05:11:30 -- dd/posix.sh@1 -- # cleanup 00:29:00.406 05:11:30 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:29:00.406 05:11:30 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:29:00.406 00:29:00.406 real 0m32.692s 00:29:00.406 user 0m17.355s 00:29:00.406 sys 0m9.181s 00:29:00.406 05:11:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:00.406 05:11:30 -- common/autotest_common.sh@10 -- # set +x 00:29:00.406 ************************************ 00:29:00.406 END TEST spdk_dd_posix 00:29:00.406 ************************************ 00:29:00.406 05:11:30 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:29:00.406 05:11:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:00.406 05:11:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:00.406 05:11:30 -- 
common/autotest_common.sh@10 -- # set +x 00:29:00.406 ************************************ 00:29:00.406 START TEST spdk_dd_malloc 00:29:00.406 ************************************ 00:29:00.406 05:11:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:29:00.666 * Looking for test storage... 00:29:00.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:00.666 05:11:30 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:00.666 05:11:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.666 05:11:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.666 05:11:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.666 05:11:30 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:00.666 05:11:30 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:00.666 05:11:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:00.666 05:11:30 -- paths/export.sh@5 -- # export PATH 00:29:00.666 05:11:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:00.666 05:11:30 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:29:00.666 05:11:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:00.666 05:11:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:00.666 05:11:30 -- common/autotest_common.sh@10 -- # set +x 00:29:00.666 ************************************ 00:29:00.666 START TEST dd_malloc_copy 00:29:00.666 ************************************ 00:29:00.666 05:11:30 -- 
common/autotest_common.sh@1104 -- # malloc_copy 00:29:00.666 05:11:30 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:29:00.666 05:11:30 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:29:00.666 05:11:30 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:29:00.666 05:11:30 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:29:00.666 05:11:30 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:29:00.666 05:11:30 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:29:00.666 05:11:30 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:29:00.666 05:11:30 -- dd/malloc.sh@28 -- # gen_conf 00:29:00.666 05:11:30 -- dd/common.sh@31 -- # xtrace_disable 00:29:00.666 05:11:30 -- common/autotest_common.sh@10 -- # set +x 00:29:00.666 { 00:29:00.666 "subsystems": [ 00:29:00.666 { 00:29:00.666 "subsystem": "bdev", 00:29:00.666 "config": [ 00:29:00.666 { 00:29:00.666 "params": { 00:29:00.666 "block_size": 512, 00:29:00.666 "num_blocks": 1048576, 00:29:00.666 "name": "malloc0" 00:29:00.666 }, 00:29:00.666 "method": "bdev_malloc_create" 00:29:00.666 }, 00:29:00.666 { 00:29:00.666 "params": { 00:29:00.666 "block_size": 512, 00:29:00.666 "num_blocks": 1048576, 00:29:00.666 "name": "malloc1" 00:29:00.666 }, 00:29:00.666 "method": "bdev_malloc_create" 00:29:00.666 }, 00:29:00.666 { 00:29:00.666 "method": "bdev_wait_for_examine" 00:29:00.666 } 00:29:00.666 ] 00:29:00.666 } 00:29:00.666 ] 00:29:00.666 } 00:29:00.666 [2024-04-27 05:11:30.410646] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:29:00.666 [2024-04-27 05:11:30.410921] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147141 ] 00:29:00.666 [2024-04-27 05:11:30.582675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.925 [2024-04-27 05:11:30.687048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.270  Copying: 190/512 [MB] (190 MBps) Copying: 382/512 [MB] (191 MBps) Copying: 512/512 [MB] (average 192 MBps) 00:29:05.270 00:29:05.270 05:11:35 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:29:05.270 05:11:35 -- dd/malloc.sh@33 -- # gen_conf 00:29:05.270 05:11:35 -- dd/common.sh@31 -- # xtrace_disable 00:29:05.270 05:11:35 -- common/autotest_common.sh@10 -- # set +x 00:29:05.270 { 00:29:05.270 "subsystems": [ 00:29:05.270 { 00:29:05.270 "subsystem": "bdev", 00:29:05.270 "config": [ 00:29:05.270 { 00:29:05.270 "params": { 00:29:05.270 "block_size": 512, 00:29:05.270 "num_blocks": 1048576, 00:29:05.270 "name": "malloc0" 00:29:05.270 }, 00:29:05.270 "method": "bdev_malloc_create" 00:29:05.270 }, 00:29:05.270 { 00:29:05.270 "params": { 00:29:05.270 "block_size": 512, 00:29:05.270 "num_blocks": 1048576, 00:29:05.270 "name": "malloc1" 00:29:05.270 }, 00:29:05.270 "method": "bdev_malloc_create" 00:29:05.270 }, 00:29:05.270 { 00:29:05.270 "method": "bdev_wait_for_examine" 00:29:05.270 } 00:29:05.270 ] 00:29:05.270 } 00:29:05.270 ] 00:29:05.270 } 00:29:05.270 [2024-04-27 05:11:35.093914] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:05.270 [2024-04-27 05:11:35.094167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147204 ] 00:29:05.529 [2024-04-27 05:11:35.262568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.529 [2024-04-27 05:11:35.358338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.521  Copying: 203/512 [MB] (203 MBps) Copying: 402/512 [MB] (199 MBps) Copying: 512/512 [MB] (average 200 MBps) 00:29:09.521 00:29:09.521 00:29:09.521 real 0m9.044s 00:29:09.521 user 0m7.429s 00:29:09.521 sys 0m1.471s 00:29:09.521 05:11:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:09.521 05:11:39 -- common/autotest_common.sh@10 -- # set +x 00:29:09.521 ************************************ 00:29:09.521 END TEST dd_malloc_copy 00:29:09.521 ************************************ 00:29:09.521 00:29:09.521 real 0m9.176s 00:29:09.521 user 0m7.500s 00:29:09.521 sys 0m1.535s 00:29:09.521 05:11:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:09.521 ************************************ 00:29:09.521 05:11:39 -- common/autotest_common.sh@10 -- # set +x 00:29:09.521 END TEST spdk_dd_malloc 00:29:09.521 ************************************ 00:29:09.780 05:11:39 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:29:09.780 05:11:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:09.781 05:11:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:09.781 05:11:39 -- common/autotest_common.sh@10 -- # set +x 00:29:09.781 ************************************ 00:29:09.781 
START TEST spdk_dd_bdev_to_bdev 00:29:09.781 ************************************ 00:29:09.781 05:11:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:29:09.781 * Looking for test storage... 00:29:09.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:09.781 05:11:39 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:09.781 05:11:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.781 05:11:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.781 05:11:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.781 05:11:39 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:09.781 05:11:39 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:09.781 05:11:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:09.781 05:11:39 -- paths/export.sh@5 -- # export PATH 00:29:09.781 05:11:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:09.781 05:11:39 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:29:09.781 05:11:39 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:29:09.781 05:11:39 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:29:09.781 05:11:39 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:29:09.781 05:11:39 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:29:09.781 05:11:39 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:29:09.781 05:11:39 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:29:09.781 05:11:39 -- dd/bdev_to_bdev.sh@68 -- # 
aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:29:09.781 05:11:39 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:29:09.781 05:11:39 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:29:09.781 05:11:39 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:29:09.781 05:11:39 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:29:09.781 05:11:39 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:29:09.781 05:11:39 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:29:09.781 [2024-04-27 05:11:39.623667] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:09.781 [2024-04-27 05:11:39.623991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147321 ] 00:29:10.040 [2024-04-27 05:11:39.798169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.040 [2024-04-27 05:11:39.902959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.868  Copying: 256/256 [MB] (average 1080 MBps) 00:29:10.868 00:29:10.868 05:11:40 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:10.868 05:11:40 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:10.868 05:11:40 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:29:10.868 05:11:40 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:29:10.868 05:11:40 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:29:10.868 05:11:40 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:10.868 05:11:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:10.868 05:11:40 -- common/autotest_common.sh@10 -- # set +x 00:29:10.868 ************************************ 00:29:10.868 START TEST dd_inflate_file 00:29:10.868 ************************************ 00:29:10.868 05:11:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:29:10.868 [2024-04-27 05:11:40.751189] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:29:10.868 [2024-04-27 05:11:40.751518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147344 ] 00:29:11.127 [2024-04-27 05:11:40.927558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.127 [2024-04-27 05:11:41.042003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.954  Copying: 64/64 [MB] (average 744 MBps) 00:29:11.954 00:29:11.954 00:29:11.954 real 0m0.934s 00:29:11.954 user 0m0.467s 00:29:11.954 sys 0m0.338s 00:29:11.954 05:11:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:11.954 05:11:41 -- common/autotest_common.sh@10 -- # set +x 00:29:11.954 ************************************ 00:29:11.954 END TEST dd_inflate_file 00:29:11.954 ************************************ 00:29:11.954 05:11:41 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:29:11.954 05:11:41 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:29:11.954 05:11:41 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:29:11.954 05:11:41 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:29:11.954 05:11:41 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:29:11.954 05:11:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:11.954 05:11:41 -- common/autotest_common.sh@10 -- # set +x 00:29:11.954 05:11:41 -- dd/common.sh@31 -- # xtrace_disable 00:29:11.954 05:11:41 -- common/autotest_common.sh@10 -- # set +x 00:29:11.954 ************************************ 00:29:11.954 START TEST dd_copy_to_out_bdev 00:29:11.954 ************************************ 00:29:11.954 05:11:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:29:11.954 { 00:29:11.954 "subsystems": [ 00:29:11.954 { 00:29:11.954 "subsystem": "bdev", 00:29:11.954 "config": [ 00:29:11.954 { 00:29:11.954 "params": { 00:29:11.954 "block_size": 4096, 00:29:11.954 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:11.954 "name": "aio1" 00:29:11.954 }, 00:29:11.954 "method": "bdev_aio_create" 00:29:11.954 }, 00:29:11.954 { 00:29:11.954 "params": { 00:29:11.954 "trtype": "pcie", 00:29:11.954 "traddr": "0000:00:06.0", 00:29:11.954 "name": "Nvme0" 00:29:11.954 }, 00:29:11.954 "method": "bdev_nvme_attach_controller" 00:29:11.954 }, 00:29:11.954 { 00:29:11.954 "method": "bdev_wait_for_examine" 00:29:11.954 } 00:29:11.954 ] 00:29:11.954 } 00:29:11.954 ] 00:29:11.954 } 00:29:11.954 [2024-04-27 05:11:41.741954] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
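The JSON block above is what gen_conf emits for this run; spdk_dd reads it from an anonymous descriptor via --json /dev/fd/62. A minimal stand-alone sketch of the same pattern, using a temporary config file instead of the descriptor and reusing the bdev names, file paths and PCI address this particular job happens to use (treat them as placeholders for any other setup):

conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_aio_create",
          "params": { "name": "aio1",
                      "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
                      "block_size": 4096 } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:06.0" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# copy the dump file onto the NVMe bdev, as the dd_copy_to_out_bdev case above does
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json "$conf"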
00:29:11.954 [2024-04-27 05:11:41.742257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147391 ] 00:29:12.213 [2024-04-27 05:11:41.911698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.213 [2024-04-27 05:11:42.019799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.415  Copying: 44/64 [MB] (44 MBps) Copying: 64/64 [MB] (average 44 MBps) 00:29:14.415 00:29:14.415 00:29:14.415 real 0m2.454s 00:29:14.415 user 0m1.982s 00:29:14.415 sys 0m0.369s 00:29:14.415 05:11:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:14.415 ************************************ 00:29:14.415 05:11:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.415 END TEST dd_copy_to_out_bdev 00:29:14.415 ************************************ 00:29:14.415 05:11:44 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:29:14.415 05:11:44 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:29:14.415 05:11:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:14.415 05:11:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:14.415 05:11:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.415 ************************************ 00:29:14.415 START TEST dd_offset_magic 00:29:14.415 ************************************ 00:29:14.415 05:11:44 -- common/autotest_common.sh@1104 -- # offset_magic 00:29:14.415 05:11:44 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:29:14.415 05:11:44 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:29:14.415 05:11:44 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:29:14.415 05:11:44 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:29:14.416 05:11:44 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:29:14.416 05:11:44 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:29:14.416 05:11:44 -- dd/common.sh@31 -- # xtrace_disable 00:29:14.416 05:11:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.416 [2024-04-27 05:11:44.255934] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:29:14.416 [2024-04-27 05:11:44.256863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147450 ] 00:29:14.416 { 00:29:14.416 "subsystems": [ 00:29:14.416 { 00:29:14.416 "subsystem": "bdev", 00:29:14.416 "config": [ 00:29:14.416 { 00:29:14.416 "params": { 00:29:14.416 "block_size": 4096, 00:29:14.416 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:14.416 "name": "aio1" 00:29:14.416 }, 00:29:14.416 "method": "bdev_aio_create" 00:29:14.416 }, 00:29:14.416 { 00:29:14.416 "params": { 00:29:14.416 "trtype": "pcie", 00:29:14.416 "traddr": "0000:00:06.0", 00:29:14.416 "name": "Nvme0" 00:29:14.416 }, 00:29:14.416 "method": "bdev_nvme_attach_controller" 00:29:14.416 }, 00:29:14.416 { 00:29:14.416 "method": "bdev_wait_for_examine" 00:29:14.416 } 00:29:14.416 ] 00:29:14.416 } 00:29:14.416 ] 00:29:14.416 } 00:29:14.674 [2024-04-27 05:11:44.425798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.674 [2024-04-27 05:11:44.514316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.870  Copying: 65/65 [MB] (average 128 MBps) 00:29:15.870 00:29:15.870 05:11:45 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:29:15.870 05:11:45 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:29:15.870 05:11:45 -- dd/common.sh@31 -- # xtrace_disable 00:29:15.870 05:11:45 -- common/autotest_common.sh@10 -- # set +x 00:29:15.870 [2024-04-27 05:11:45.746756] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:29:15.870 [2024-04-27 05:11:45.747011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147475 ] 00:29:15.870 { 00:29:15.870 "subsystems": [ 00:29:15.870 { 00:29:15.870 "subsystem": "bdev", 00:29:15.870 "config": [ 00:29:15.870 { 00:29:15.870 "params": { 00:29:15.870 "block_size": 4096, 00:29:15.870 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:15.870 "name": "aio1" 00:29:15.870 }, 00:29:15.870 "method": "bdev_aio_create" 00:29:15.870 }, 00:29:15.870 { 00:29:15.870 "params": { 00:29:15.870 "trtype": "pcie", 00:29:15.870 "traddr": "0000:00:06.0", 00:29:15.870 "name": "Nvme0" 00:29:15.870 }, 00:29:15.870 "method": "bdev_nvme_attach_controller" 00:29:15.870 }, 00:29:15.870 { 00:29:15.870 "method": "bdev_wait_for_examine" 00:29:15.870 } 00:29:15.870 ] 00:29:15.870 } 00:29:15.870 ] 00:29:15.870 } 00:29:16.129 [2024-04-27 05:11:45.920901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.129 [2024-04-27 05:11:46.014820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.956  Copying: 1024/1024 [kB] (average 500 MBps) 00:29:16.956 00:29:16.956 05:11:46 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:29:16.956 05:11:46 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:29:16.956 05:11:46 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:29:16.956 05:11:46 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:29:16.956 05:11:46 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:29:16.956 05:11:46 -- dd/common.sh@31 -- # xtrace_disable 00:29:16.956 05:11:46 -- common/autotest_common.sh@10 -- # set +x 00:29:16.956 [2024-04-27 05:11:46.768500] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:29:16.956 [2024-04-27 05:11:46.768782] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147496 ] 00:29:16.956 { 00:29:16.956 "subsystems": [ 00:29:16.956 { 00:29:16.956 "subsystem": "bdev", 00:29:16.956 "config": [ 00:29:16.956 { 00:29:16.956 "params": { 00:29:16.956 "block_size": 4096, 00:29:16.956 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:16.956 "name": "aio1" 00:29:16.956 }, 00:29:16.956 "method": "bdev_aio_create" 00:29:16.956 }, 00:29:16.956 { 00:29:16.956 "params": { 00:29:16.956 "trtype": "pcie", 00:29:16.956 "traddr": "0000:00:06.0", 00:29:16.956 "name": "Nvme0" 00:29:16.956 }, 00:29:16.956 "method": "bdev_nvme_attach_controller" 00:29:16.956 }, 00:29:16.956 { 00:29:16.956 "method": "bdev_wait_for_examine" 00:29:16.956 } 00:29:16.956 ] 00:29:16.956 } 00:29:16.956 ] 00:29:16.956 } 00:29:17.215 [2024-04-27 05:11:46.938859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.215 [2024-04-27 05:11:47.046897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.375  Copying: 65/65 [MB] (average 169 MBps) 00:29:18.375 00:29:18.375 05:11:48 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:29:18.375 05:11:48 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:29:18.375 05:11:48 -- dd/common.sh@31 -- # xtrace_disable 00:29:18.375 05:11:48 -- common/autotest_common.sh@10 -- # set +x 00:29:18.375 [2024-04-27 05:11:48.131287] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:29:18.375 [2024-04-27 05:11:48.131530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147518 ] 00:29:18.375 { 00:29:18.375 "subsystems": [ 00:29:18.375 { 00:29:18.375 "subsystem": "bdev", 00:29:18.375 "config": [ 00:29:18.375 { 00:29:18.375 "params": { 00:29:18.375 "block_size": 4096, 00:29:18.375 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:18.375 "name": "aio1" 00:29:18.375 }, 00:29:18.375 "method": "bdev_aio_create" 00:29:18.375 }, 00:29:18.375 { 00:29:18.375 "params": { 00:29:18.375 "trtype": "pcie", 00:29:18.375 "traddr": "0000:00:06.0", 00:29:18.375 "name": "Nvme0" 00:29:18.375 }, 00:29:18.375 "method": "bdev_nvme_attach_controller" 00:29:18.375 }, 00:29:18.375 { 00:29:18.375 "method": "bdev_wait_for_examine" 00:29:18.375 } 00:29:18.375 ] 00:29:18.375 } 00:29:18.375 ] 00:29:18.375 } 00:29:18.634 [2024-04-27 05:11:48.303060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.634 [2024-04-27 05:11:48.420356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.461  Copying: 1024/1024 [kB] (average 500 MBps) 00:29:19.461 00:29:19.461 05:11:49 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:29:19.461 05:11:49 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:29:19.461 00:29:19.461 real 0m4.902s 00:29:19.461 user 0m2.496s 00:29:19.461 sys 0m1.200s 00:29:19.461 05:11:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:19.461 ************************************ 00:29:19.461 END TEST dd_offset_magic 00:29:19.461 ************************************ 00:29:19.461 05:11:49 -- common/autotest_common.sh@10 -- # set +x 00:29:19.461 05:11:49 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:29:19.461 05:11:49 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:29:19.461 05:11:49 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:29:19.461 05:11:49 -- dd/common.sh@11 -- # local nvme_ref= 00:29:19.461 05:11:49 -- dd/common.sh@12 -- # local size=4194330 00:29:19.461 05:11:49 -- dd/common.sh@14 -- # local bs=1048576 00:29:19.461 05:11:49 -- dd/common.sh@15 -- # local count=5 00:29:19.461 05:11:49 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:29:19.461 05:11:49 -- dd/common.sh@18 -- # gen_conf 00:29:19.461 05:11:49 -- dd/common.sh@31 -- # xtrace_disable 00:29:19.461 05:11:49 -- common/autotest_common.sh@10 -- # set +x 00:29:19.461 { 00:29:19.461 "subsystems": [ 00:29:19.461 { 00:29:19.461 "subsystem": "bdev", 00:29:19.461 "config": [ 00:29:19.461 { 00:29:19.461 "params": { 00:29:19.461 "block_size": 4096, 00:29:19.461 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:19.461 "name": "aio1" 00:29:19.461 }, 00:29:19.461 "method": "bdev_aio_create" 00:29:19.461 }, 00:29:19.461 { 00:29:19.461 "params": { 00:29:19.461 "trtype": "pcie", 00:29:19.461 "traddr": "0000:00:06.0", 00:29:19.461 "name": "Nvme0" 00:29:19.461 }, 00:29:19.461 "method": "bdev_nvme_attach_controller" 00:29:19.461 }, 00:29:19.461 { 00:29:19.461 "method": "bdev_wait_for_examine" 00:29:19.461 } 00:29:19.461 ] 00:29:19.461 } 00:29:19.461 ] 00:29:19.461 } 00:29:19.461 [2024-04-27 05:11:49.205121] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
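The dd_offset_magic case that finished just above boils down to a seek/skip round trip: data staged on Nvme0n1 (whose first bytes are the magic line) is copied onto the AIO bdev at a given 1 MiB offset, one unit is read back from that same offset, and the first 26 bytes must still match. A condensed sketch for the 16 MiB offset, with $conf standing in for the JSON config the real run passes on /dev/fd/62 and $test_file1 for dd.dump1:

spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json "$conf"        # write 65 units starting at unit 16
spdk_dd --ib=aio1 --of="$test_file1" --count=1 --skip=16 --bs=1048576 --json "$conf"   # read 1 unit back from unit 16
read -rn26 magic_check < "$test_file1"
[[ $magic_check == 'This Is Our Magic, find it' ]]   # the marker must survive the round trip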
00:29:19.461 [2024-04-27 05:11:49.205940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147561 ] 00:29:19.461 [2024-04-27 05:11:49.374971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.721 [2024-04-27 05:11:49.460839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.239  Copying: 5120/5120 [kB] (average 1000 MBps) 00:29:20.239 00:29:20.239 05:11:50 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:29:20.239 05:11:50 -- dd/common.sh@10 -- # local bdev=aio1 00:29:20.239 05:11:50 -- dd/common.sh@11 -- # local nvme_ref= 00:29:20.239 05:11:50 -- dd/common.sh@12 -- # local size=4194330 00:29:20.239 05:11:50 -- dd/common.sh@14 -- # local bs=1048576 00:29:20.239 05:11:50 -- dd/common.sh@15 -- # local count=5 00:29:20.239 05:11:50 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:29:20.239 05:11:50 -- dd/common.sh@18 -- # gen_conf 00:29:20.239 05:11:50 -- dd/common.sh@31 -- # xtrace_disable 00:29:20.239 05:11:50 -- common/autotest_common.sh@10 -- # set +x 00:29:20.239 { 00:29:20.239 "subsystems": [ 00:29:20.239 { 00:29:20.239 "subsystem": "bdev", 00:29:20.239 "config": [ 00:29:20.239 { 00:29:20.240 "params": { 00:29:20.240 "block_size": 4096, 00:29:20.240 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:20.240 "name": "aio1" 00:29:20.240 }, 00:29:20.240 "method": "bdev_aio_create" 00:29:20.240 }, 00:29:20.240 { 00:29:20.240 "params": { 00:29:20.240 "trtype": "pcie", 00:29:20.240 "traddr": "0000:00:06.0", 00:29:20.240 "name": "Nvme0" 00:29:20.240 }, 00:29:20.240 "method": "bdev_nvme_attach_controller" 00:29:20.240 }, 00:29:20.240 { 00:29:20.240 "method": "bdev_wait_for_examine" 00:29:20.240 } 00:29:20.240 ] 00:29:20.240 } 00:29:20.240 ] 00:29:20.240 } 00:29:20.240 [2024-04-27 05:11:50.141333] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
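The two cleanup passes here (clear_nvme for Nvme0n1 and then for aio1) overwrite the start of each target with zeros so the data written by this test does not linger for later ones. Each pass is asked to clear 4194330 bytes, and at the 1 MiB unit size that rounds up to the count=5 visible in the trace:

count = ceil(4194330 / 1048576) = ceil(4.00002...) = 5 units of 1 MiB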
00:29:20.240 [2024-04-27 05:11:50.141589] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147576 ] 00:29:20.499 [2024-04-27 05:11:50.311484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.499 [2024-04-27 05:11:50.414603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.325  Copying: 5120/5120 [kB] (average 192 MBps) 00:29:21.325 00:29:21.325 05:11:51 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:29:21.325 ************************************ 00:29:21.325 END TEST spdk_dd_bdev_to_bdev 00:29:21.325 ************************************ 00:29:21.325 00:29:21.325 real 0m11.655s 00:29:21.325 user 0m6.740s 00:29:21.325 sys 0m3.073s 00:29:21.325 05:11:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:21.325 05:11:51 -- common/autotest_common.sh@10 -- # set +x 00:29:21.325 05:11:51 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:29:21.325 05:11:51 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:29:21.325 05:11:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:21.325 05:11:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:21.325 05:11:51 -- common/autotest_common.sh@10 -- # set +x 00:29:21.325 ************************************ 00:29:21.325 START TEST spdk_dd_sparse 00:29:21.325 ************************************ 00:29:21.325 05:11:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:29:21.585 * Looking for test storage... 
00:29:21.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:21.585 05:11:51 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:21.585 05:11:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.585 05:11:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.585 05:11:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.585 05:11:51 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:21.585 05:11:51 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:21.585 05:11:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:21.586 05:11:51 -- paths/export.sh@5 -- # export PATH 00:29:21.586 05:11:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:21.586 05:11:51 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:29:21.586 05:11:51 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:29:21.586 05:11:51 -- dd/sparse.sh@110 -- # file1=file_zero1 00:29:21.586 05:11:51 -- dd/sparse.sh@111 -- # file2=file_zero2 00:29:21.586 05:11:51 -- dd/sparse.sh@112 -- # file3=file_zero3 00:29:21.586 05:11:51 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:29:21.586 05:11:51 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:29:21.586 05:11:51 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:29:21.586 05:11:51 -- dd/sparse.sh@118 -- # prepare 00:29:21.586 05:11:51 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:29:21.586 05:11:51 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:29:21.586 1+0 records in 00:29:21.586 1+0 records 
out 00:29:21.586 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00866417 s, 484 MB/s 00:29:21.586 05:11:51 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:29:21.586 1+0 records in 00:29:21.586 1+0 records out 00:29:21.586 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00863994 s, 485 MB/s 00:29:21.586 05:11:51 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:29:21.586 1+0 records in 00:29:21.586 1+0 records out 00:29:21.586 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00935014 s, 449 MB/s 00:29:21.586 05:11:51 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:29:21.586 05:11:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:21.586 05:11:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:21.586 05:11:51 -- common/autotest_common.sh@10 -- # set +x 00:29:21.586 ************************************ 00:29:21.586 START TEST dd_sparse_file_to_file 00:29:21.586 ************************************ 00:29:21.586 05:11:51 -- common/autotest_common.sh@1104 -- # file_to_file 00:29:21.586 05:11:51 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:29:21.586 05:11:51 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:29:21.586 05:11:51 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:29:21.586 05:11:51 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:29:21.586 05:11:51 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:29:21.586 05:11:51 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:29:21.586 05:11:51 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:29:21.586 05:11:51 -- dd/sparse.sh@41 -- # gen_conf 00:29:21.586 05:11:51 -- dd/common.sh@31 -- # xtrace_disable 00:29:21.586 05:11:51 -- common/autotest_common.sh@10 -- # set +x 00:29:21.586 { 00:29:21.586 "subsystems": [ 00:29:21.586 { 00:29:21.586 "subsystem": "bdev", 00:29:21.586 "config": [ 00:29:21.586 { 00:29:21.586 "params": { 00:29:21.586 "block_size": 4096, 00:29:21.586 "filename": "dd_sparse_aio_disk", 00:29:21.586 "name": "dd_aio" 00:29:21.586 }, 00:29:21.586 "method": "bdev_aio_create" 00:29:21.586 }, 00:29:21.586 { 00:29:21.586 "params": { 00:29:21.586 "lvs_name": "dd_lvstore", 00:29:21.586 "bdev_name": "dd_aio" 00:29:21.586 }, 00:29:21.586 "method": "bdev_lvol_create_lvstore" 00:29:21.586 }, 00:29:21.586 { 00:29:21.586 "method": "bdev_wait_for_examine" 00:29:21.586 } 00:29:21.586 ] 00:29:21.586 } 00:29:21.586 ] 00:29:21.586 } 00:29:21.586 [2024-04-27 05:11:51.399862] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
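The three dd writes above (count=1 with bs=4M at seek 0, 4 and 8) give file_zero1 three 4 MiB data extents at offsets 0, 16 MiB and 32 MiB, with holes in between. The stat checks that follow the copy confirm that --sparse preserved this layout: the apparent size stays at 36 MiB while only the three written extents are actually allocated.

apparent size: (8 + 1) * 4 MiB = 36 MiB = 37748736 bytes          (stat --printf=%s)
allocated:      3      * 4 MiB = 12 MiB = 24576 blocks of 512 B   (stat --printf=%b)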
00:29:21.586 [2024-04-27 05:11:51.400365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147695 ] 00:29:21.844 [2024-04-27 05:11:51.580857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.844 [2024-04-27 05:11:51.679526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.671  Copying: 12/36 [MB] (average 857 MBps) 00:29:22.671 00:29:22.671 05:11:52 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:29:22.671 05:11:52 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:29:22.671 05:11:52 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:29:22.671 05:11:52 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:29:22.671 05:11:52 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:29:22.671 05:11:52 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:29:22.671 05:11:52 -- dd/sparse.sh@52 -- # stat1_b=24576 00:29:22.671 05:11:52 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:29:22.671 05:11:52 -- dd/sparse.sh@53 -- # stat2_b=24576 00:29:22.671 05:11:52 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:29:22.671 00:29:22.671 real 0m1.012s 00:29:22.671 user 0m0.510s 00:29:22.671 sys 0m0.353s 00:29:22.671 05:11:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:22.671 05:11:52 -- common/autotest_common.sh@10 -- # set +x 00:29:22.671 ************************************ 00:29:22.671 END TEST dd_sparse_file_to_file 00:29:22.671 ************************************ 00:29:22.671 05:11:52 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:29:22.671 05:11:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:22.671 05:11:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:22.671 05:11:52 -- common/autotest_common.sh@10 -- # set +x 00:29:22.671 ************************************ 00:29:22.671 START TEST dd_sparse_file_to_bdev 00:29:22.671 ************************************ 00:29:22.671 05:11:52 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:29:22.671 05:11:52 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:29:22.671 05:11:52 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:29:22.671 05:11:52 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:29:22.671 05:11:52 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:29:22.671 05:11:52 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:29:22.671 05:11:52 -- dd/sparse.sh@73 -- # gen_conf 00:29:22.671 05:11:52 -- dd/common.sh@31 -- # xtrace_disable 00:29:22.671 05:11:52 -- common/autotest_common.sh@10 -- # set +x 00:29:22.671 [2024-04-27 05:11:52.453518] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:29:22.671 [2024-04-27 05:11:52.453840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147950 ] 00:29:22.671 { 00:29:22.671 "subsystems": [ 00:29:22.671 { 00:29:22.671 "subsystem": "bdev", 00:29:22.671 "config": [ 00:29:22.671 { 00:29:22.671 "params": { 00:29:22.671 "block_size": 4096, 00:29:22.671 "filename": "dd_sparse_aio_disk", 00:29:22.671 "name": "dd_aio" 00:29:22.671 }, 00:29:22.671 "method": "bdev_aio_create" 00:29:22.671 }, 00:29:22.671 { 00:29:22.671 "params": { 00:29:22.671 "lvs_name": "dd_lvstore", 00:29:22.671 "lvol_name": "dd_lvol", 00:29:22.671 "size": 37748736, 00:29:22.671 "thin_provision": true 00:29:22.671 }, 00:29:22.671 "method": "bdev_lvol_create" 00:29:22.671 }, 00:29:22.671 { 00:29:22.671 "method": "bdev_wait_for_examine" 00:29:22.671 } 00:29:22.671 ] 00:29:22.671 } 00:29:22.671 ] 00:29:22.671 } 00:29:22.930 [2024-04-27 05:11:52.623051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.930 [2024-04-27 05:11:52.715320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.930 [2024-04-27 05:11:52.847754] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:29:23.201  Copying: 12/36 [MB] (average 750 MBps)[2024-04-27 05:11:52.886989] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:29:23.459 00:29:23.459 00:29:23.459 00:29:23.459 real 0m0.940s 00:29:23.459 user 0m0.546s 00:29:23.459 sys 0m0.294s 00:29:23.459 05:11:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:23.459 ************************************ 00:29:23.459 END TEST dd_sparse_file_to_bdev 00:29:23.459 ************************************ 00:29:23.459 05:11:53 -- common/autotest_common.sh@10 -- # set +x 00:29:23.459 05:11:53 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:29:23.459 05:11:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:23.459 05:11:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:23.459 05:11:53 -- common/autotest_common.sh@10 -- # set +x 00:29:23.717 ************************************ 00:29:23.717 START TEST dd_sparse_bdev_to_file 00:29:23.717 ************************************ 00:29:23.717 05:11:53 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:29:23.717 05:11:53 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:29:23.717 05:11:53 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:29:23.717 05:11:53 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:29:23.718 05:11:53 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:29:23.718 05:11:53 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:29:23.718 05:11:53 -- dd/sparse.sh@91 -- # gen_conf 00:29:23.718 05:11:53 -- dd/common.sh@31 -- # xtrace_disable 00:29:23.718 05:11:53 -- common/autotest_common.sh@10 -- # set +x 00:29:23.718 { 00:29:23.718 "subsystems": [ 00:29:23.718 { 00:29:23.718 "subsystem": "bdev", 00:29:23.718 "config": [ 00:29:23.718 { 00:29:23.718 "params": { 00:29:23.718 "block_size": 4096, 00:29:23.718 "filename": 
"dd_sparse_aio_disk", 00:29:23.718 "name": "dd_aio" 00:29:23.718 }, 00:29:23.718 "method": "bdev_aio_create" 00:29:23.718 }, 00:29:23.718 { 00:29:23.718 "method": "bdev_wait_for_examine" 00:29:23.718 } 00:29:23.718 ] 00:29:23.718 } 00:29:23.718 ] 00:29:23.718 } 00:29:23.718 [2024-04-27 05:11:53.452107] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:23.718 [2024-04-27 05:11:53.452640] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147995 ] 00:29:23.718 [2024-04-27 05:11:53.626822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.976 [2024-04-27 05:11:53.722479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.543  Copying: 12/36 [MB] (average 857 MBps) 00:29:24.543 00:29:24.543 05:11:54 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:29:24.543 05:11:54 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:29:24.543 05:11:54 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:29:24.543 05:11:54 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:29:24.543 05:11:54 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:29:24.543 05:11:54 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:29:24.543 05:11:54 -- dd/sparse.sh@102 -- # stat2_b=24576 00:29:24.543 05:11:54 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:29:24.543 ************************************ 00:29:24.543 END TEST dd_sparse_bdev_to_file 00:29:24.543 ************************************ 00:29:24.543 05:11:54 -- dd/sparse.sh@103 -- # stat3_b=24576 00:29:24.543 05:11:54 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:29:24.543 00:29:24.543 real 0m0.922s 00:29:24.543 user 0m0.529s 00:29:24.543 sys 0m0.285s 00:29:24.543 05:11:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.543 05:11:54 -- common/autotest_common.sh@10 -- # set +x 00:29:24.543 05:11:54 -- dd/sparse.sh@1 -- # cleanup 00:29:24.543 05:11:54 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:29:24.543 05:11:54 -- dd/sparse.sh@12 -- # rm file_zero1 00:29:24.543 05:11:54 -- dd/sparse.sh@13 -- # rm file_zero2 00:29:24.543 05:11:54 -- dd/sparse.sh@14 -- # rm file_zero3 00:29:24.543 ************************************ 00:29:24.543 END TEST spdk_dd_sparse 00:29:24.543 ************************************ 00:29:24.543 00:29:24.543 real 0m3.175s 00:29:24.543 user 0m1.720s 00:29:24.543 sys 0m1.096s 00:29:24.543 05:11:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.543 05:11:54 -- common/autotest_common.sh@10 -- # set +x 00:29:24.543 05:11:54 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:29:24.543 05:11:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:24.543 05:11:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:24.543 05:11:54 -- common/autotest_common.sh@10 -- # set +x 00:29:24.543 ************************************ 00:29:24.543 START TEST spdk_dd_negative 00:29:24.543 ************************************ 00:29:24.543 05:11:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:29:24.802 * Looking for test storage... 
00:29:24.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:24.802 05:11:54 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:24.802 05:11:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.802 05:11:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.802 05:11:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.802 05:11:54 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:24.803 05:11:54 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:24.803 05:11:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:24.803 05:11:54 -- paths/export.sh@5 -- # export PATH 00:29:24.803 05:11:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:24.803 05:11:54 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:24.803 05:11:54 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:24.803 05:11:54 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:24.803 05:11:54 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:24.803 05:11:54 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:29:24.803 05:11:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:24.803 05:11:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:24.803 05:11:54 -- common/autotest_common.sh@10 -- # set +x 00:29:24.803 ************************************ 00:29:24.803 
START TEST dd_invalid_arguments 00:29:24.803 ************************************ 00:29:24.803 05:11:54 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:29:24.803 05:11:54 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:24.803 05:11:54 -- common/autotest_common.sh@640 -- # local es=0 00:29:24.803 05:11:54 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:24.803 05:11:54 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.803 05:11:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.803 05:11:54 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.803 05:11:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.803 05:11:54 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.803 05:11:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.803 05:11:54 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.803 05:11:54 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:24.803 05:11:54 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:24.803 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:29:24.803 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:29:24.803 options: 00:29:24.803 -c, --config JSON config file (default none) 00:29:24.803 --json JSON config file (default none) 00:29:24.803 --json-ignore-init-errors 00:29:24.803 don't exit on invalid config entry 00:29:24.803 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:29:24.803 -g, --single-file-segments 00:29:24.803 force creating just one hugetlbfs file 00:29:24.803 -h, --help show this usage 00:29:24.803 -i, --shm-id shared memory ID (optional) 00:29:24.803 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:29:24.803 --lcores lcore to CPU mapping list. The list is in the format: 00:29:24.803 [<,lcores[@CPUs]>...] 00:29:24.803 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:29:24.803 Within the group, '-' is used for range separator, 00:29:24.803 ',' is used for single number separator. 00:29:24.803 '( )' can be omitted for single element group, 00:29:24.803 '@' can be omitted if cpus and lcores have the same value 00:29:24.803 -n, --mem-channels channel number of memory channels used for DPDK 00:29:24.803 -p, --main-core main (primary) core for DPDK 00:29:24.803 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:29:24.803 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:29:24.803 --disable-cpumask-locks Disable CPU core lock files. 
00:29:24.803 --silence-noticelog disable notice level logging to stderr 00:29:24.803 --msg-mempool-size global message memory pool size in count (default: 262143) 00:29:24.803 -u, --no-pci disable PCI access 00:29:24.803 --wait-for-rpc wait for RPCs to initialize subsystems 00:29:24.803 --max-delay maximum reactor delay (in microseconds) 00:29:24.803 -B, --pci-blocked pci addr to block (can be used more than once) 00:29:24.803 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:29:24.803 -R, --huge-unlink unlink huge files after initialization 00:29:24.803 -v, --version print SPDK version 00:29:24.803 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:29:24.803 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:29:24.803 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:29:24.803 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:29:24.803 Tracepoints vary in size and can use more than one trace entry. 00:29:24.803 --rpcs-allowed comma-separated list of permitted RPCS 00:29:24.803 --env-context Opaque context for use of the env implementation 00:29:24.803 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:29:24.803 --no-huge run without using hugepages 00:29:24.803 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:29:24.803 -e, --tpoint-group [:] 00:29:24.803 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:29:24.803 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:29:24.803 Groups and [2024-04-27 05:11:54.584671] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:29:24.803 masks can be combined (e.g. thread,bdev:0x1). 00:29:24.803 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:29:24.803 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:29:24.803 [--------- DD Options ---------] 00:29:24.803 --if Input file. Must specify either --if or --ib. 00:29:24.803 --ib Input bdev. Must specifier either --if or --ib 00:29:24.803 --of Output file. Must specify either --of or --ob. 00:29:24.803 --ob Output bdev. Must specify either --of or --ob. 00:29:24.803 --iflag Input file flags. 00:29:24.803 --oflag Output file flags. 00:29:24.803 --bs I/O unit size (default: 4096) 00:29:24.803 --qd Queue depth (default: 2) 00:29:24.803 --count I/O unit count. The number of I/O units to copy. (default: all) 00:29:24.803 --skip Skip this many I/O units at start of input. 
(default: 0) 00:29:24.803 --seek Skip this many I/O units at start of output. (default: 0) 00:29:24.803 --aio Force usage of AIO. (by default io_uring is used if available) 00:29:24.803 --sparse Enable hole skipping in input target 00:29:24.803 Available iflag and oflag values: 00:29:24.803 append - append mode 00:29:24.803 direct - use direct I/O for data 00:29:24.803 directory - fail unless a directory 00:29:24.803 dsync - use synchronized I/O for data 00:29:24.803 noatime - do not update access time 00:29:24.803 noctty - do not assign controlling terminal from file 00:29:24.803 nofollow - do not follow symlinks 00:29:24.803 nonblock - use non-blocking I/O 00:29:24.803 sync - use synchronized I/O for data and metadata 00:29:24.803 05:11:54 -- common/autotest_common.sh@643 -- # es=2 00:29:24.803 05:11:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:24.803 05:11:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:24.803 05:11:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:24.803 00:29:24.803 real 0m0.104s 00:29:24.803 user 0m0.056s 00:29:24.803 sys 0m0.046s 00:29:24.803 05:11:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.803 05:11:54 -- common/autotest_common.sh@10 -- # set +x 00:29:24.803 ************************************ 00:29:24.803 END TEST dd_invalid_arguments 00:29:24.803 ************************************ 00:29:24.803 05:11:54 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:29:24.803 05:11:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:24.803 05:11:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:24.803 05:11:54 -- common/autotest_common.sh@10 -- # set +x 00:29:24.803 ************************************ 00:29:24.803 START TEST dd_double_input 00:29:24.803 ************************************ 00:29:24.803 05:11:54 -- common/autotest_common.sh@1104 -- # double_input 00:29:24.803 05:11:54 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:24.803 05:11:54 -- common/autotest_common.sh@640 -- # local es=0 00:29:24.803 05:11:54 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:24.803 05:11:54 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.803 05:11:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.803 05:11:54 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.803 05:11:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.803 05:11:54 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.803 05:11:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.804 05:11:54 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:24.804 05:11:54 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:24.804 05:11:54 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:25.062 [2024-04-27 05:11:54.738885] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
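Every negative case in this suite has the same shape: invoke spdk_dd with arguments it must refuse, and let the NOT wrapper from autotest_common.sh turn the expected failure into a test success (the es=... bookkeeping in the surrounding trace is that wrapper normalizing the exit status). A minimal sketch of the inversion idea only, not of the real helper:

NOT() {
    # succeed only when the wrapped command fails
    if "$@"; then
        return 1
    fi
    return 0
}

NOT spdk_dd --if=dd.dump0 --ib= --ob=   # an input file and an input bdev together must be rejected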
00:29:25.062 05:11:54 -- common/autotest_common.sh@643 -- # es=22 00:29:25.062 05:11:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:25.062 05:11:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:25.062 05:11:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:25.062 00:29:25.062 real 0m0.105s 00:29:25.062 user 0m0.063s 00:29:25.062 sys 0m0.042s 00:29:25.062 05:11:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.062 05:11:54 -- common/autotest_common.sh@10 -- # set +x 00:29:25.062 ************************************ 00:29:25.062 END TEST dd_double_input 00:29:25.062 ************************************ 00:29:25.062 05:11:54 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:29:25.062 05:11:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:25.062 05:11:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:25.062 05:11:54 -- common/autotest_common.sh@10 -- # set +x 00:29:25.062 ************************************ 00:29:25.062 START TEST dd_double_output 00:29:25.062 ************************************ 00:29:25.062 05:11:54 -- common/autotest_common.sh@1104 -- # double_output 00:29:25.062 05:11:54 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:25.062 05:11:54 -- common/autotest_common.sh@640 -- # local es=0 00:29:25.062 05:11:54 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:25.062 05:11:54 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.062 05:11:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.062 05:11:54 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.062 05:11:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.062 05:11:54 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.062 05:11:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.062 05:11:54 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.062 05:11:54 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:25.062 05:11:54 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:25.062 [2024-04-27 05:11:54.893413] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:29:25.062 05:11:54 -- common/autotest_common.sh@643 -- # es=22 00:29:25.062 05:11:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:25.062 05:11:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:25.062 05:11:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:25.062 00:29:25.062 real 0m0.097s 00:29:25.062 user 0m0.052s 00:29:25.062 sys 0m0.046s 00:29:25.062 05:11:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.062 05:11:54 -- common/autotest_common.sh@10 -- # set +x 00:29:25.062 ************************************ 00:29:25.062 END TEST dd_double_output 00:29:25.062 ************************************ 00:29:25.062 05:11:54 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:29:25.062 05:11:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:25.062 05:11:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:25.062 05:11:54 -- common/autotest_common.sh@10 -- # set +x 00:29:25.321 ************************************ 00:29:25.321 START TEST dd_no_input 00:29:25.322 ************************************ 00:29:25.322 05:11:54 -- common/autotest_common.sh@1104 -- # no_input 00:29:25.322 05:11:54 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:25.322 05:11:54 -- common/autotest_common.sh@640 -- # local es=0 00:29:25.322 05:11:54 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:25.322 05:11:54 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.322 05:11:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.322 05:11:54 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.322 05:11:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.322 05:11:54 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.322 05:11:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.322 05:11:54 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.322 05:11:54 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:25.322 05:11:54 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:25.322 [2024-04-27 05:11:55.040618] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:29:25.322 05:11:55 -- common/autotest_common.sh@643 -- # es=22 00:29:25.322 05:11:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:25.322 05:11:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:25.322 05:11:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:25.322 00:29:25.322 real 0m0.100s 00:29:25.322 user 0m0.062s 00:29:25.322 sys 0m0.038s 00:29:25.322 05:11:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.322 05:11:55 -- common/autotest_common.sh@10 -- # set +x 00:29:25.322 ************************************ 00:29:25.322 END TEST dd_no_input 00:29:25.322 ************************************ 00:29:25.322 05:11:55 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:29:25.322 05:11:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:25.322 05:11:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:25.322 05:11:55 -- common/autotest_common.sh@10 -- # set +x 00:29:25.322 ************************************ 
00:29:25.322 START TEST dd_no_output 00:29:25.322 ************************************ 00:29:25.322 05:11:55 -- common/autotest_common.sh@1104 -- # no_output 00:29:25.322 05:11:55 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:25.322 05:11:55 -- common/autotest_common.sh@640 -- # local es=0 00:29:25.322 05:11:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:25.322 05:11:55 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.322 05:11:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.322 05:11:55 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.322 05:11:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.322 05:11:55 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.322 05:11:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.322 05:11:55 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.322 05:11:55 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:25.322 05:11:55 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:25.322 [2024-04-27 05:11:55.191927] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:29:25.322 05:11:55 -- common/autotest_common.sh@643 -- # es=22 00:29:25.322 05:11:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:25.322 05:11:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:25.322 05:11:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:25.322 00:29:25.322 real 0m0.101s 00:29:25.322 user 0m0.057s 00:29:25.322 sys 0m0.044s 00:29:25.322 05:11:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.322 05:11:55 -- common/autotest_common.sh@10 -- # set +x 00:29:25.322 ************************************ 00:29:25.322 END TEST dd_no_output 00:29:25.322 ************************************ 00:29:25.580 05:11:55 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:29:25.580 05:11:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:25.581 05:11:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:25.581 05:11:55 -- common/autotest_common.sh@10 -- # set +x 00:29:25.581 ************************************ 00:29:25.581 START TEST dd_wrong_blocksize 00:29:25.581 ************************************ 00:29:25.581 05:11:55 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:29:25.581 05:11:55 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:25.581 05:11:55 -- common/autotest_common.sh@640 -- # local es=0 00:29:25.581 05:11:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:25.581 05:11:55 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.581 05:11:55 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:29:25.581 05:11:55 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.581 05:11:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.581 05:11:55 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.581 05:11:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.581 05:11:55 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.581 05:11:55 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:25.581 05:11:55 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:25.581 [2024-04-27 05:11:55.343236] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:29:25.581 05:11:55 -- common/autotest_common.sh@643 -- # es=22 00:29:25.581 05:11:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:25.581 05:11:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:25.581 05:11:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:25.581 00:29:25.581 real 0m0.100s 00:29:25.581 user 0m0.039s 00:29:25.581 sys 0m0.061s 00:29:25.581 05:11:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.581 05:11:55 -- common/autotest_common.sh@10 -- # set +x 00:29:25.581 ************************************ 00:29:25.581 END TEST dd_wrong_blocksize 00:29:25.581 ************************************ 00:29:25.581 05:11:55 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:29:25.581 05:11:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:25.581 05:11:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:25.581 05:11:55 -- common/autotest_common.sh@10 -- # set +x 00:29:25.581 ************************************ 00:29:25.581 START TEST dd_smaller_blocksize 00:29:25.581 ************************************ 00:29:25.581 05:11:55 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:29:25.581 05:11:55 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:25.581 05:11:55 -- common/autotest_common.sh@640 -- # local es=0 00:29:25.581 05:11:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:25.581 05:11:55 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.581 05:11:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.581 05:11:55 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.581 05:11:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.581 05:11:55 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.581 05:11:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.581 05:11:55 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.581 05:11:55 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:29:25.581 05:11:55 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:25.839 [2024-04-27 05:11:55.505939] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:25.840 [2024-04-27 05:11:55.506214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148247 ] 00:29:25.840 [2024-04-27 05:11:55.675142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.098 [2024-04-27 05:11:55.816312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.098 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:29:26.357 [2024-04-27 05:11:56.047503] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:29:26.357 [2024-04-27 05:11:56.047664] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:26.357 [2024-04-27 05:11:56.268172] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:26.616 05:11:56 -- common/autotest_common.sh@643 -- # es=244 00:29:26.616 05:11:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:26.616 05:11:56 -- common/autotest_common.sh@652 -- # es=116 00:29:26.616 05:11:56 -- common/autotest_common.sh@653 -- # case "$es" in 00:29:26.616 05:11:56 -- common/autotest_common.sh@660 -- # es=1 00:29:26.616 05:11:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:26.616 00:29:26.616 real 0m0.975s 00:29:26.616 user 0m0.529s 00:29:26.616 sys 0m0.346s 00:29:26.616 05:11:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:26.616 05:11:56 -- common/autotest_common.sh@10 -- # set +x 00:29:26.616 ************************************ 00:29:26.616 END TEST dd_smaller_blocksize 00:29:26.616 ************************************ 00:29:26.616 05:11:56 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:29:26.616 05:11:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:26.616 05:11:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:26.616 05:11:56 -- common/autotest_common.sh@10 -- # set +x 00:29:26.616 ************************************ 00:29:26.616 START TEST dd_invalid_count 00:29:26.616 ************************************ 00:29:26.616 05:11:56 -- common/autotest_common.sh@1104 -- # invalid_count 00:29:26.616 05:11:56 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:26.616 05:11:56 -- common/autotest_common.sh@640 -- # local es=0 00:29:26.616 05:11:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:26.616 05:11:56 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:26.616 05:11:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:26.616 05:11:56 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:26.616 05:11:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:26.616 05:11:56 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:26.616 05:11:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:26.616 05:11:56 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:26.616 05:11:56 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:26.616 05:11:56 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:26.616 [2024-04-27 05:11:56.532439] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:29:26.876 05:11:56 -- common/autotest_common.sh@643 -- # es=22 00:29:26.876 05:11:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:26.876 05:11:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:26.876 05:11:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:26.876 00:29:26.876 real 0m0.116s 00:29:26.876 user 0m0.057s 00:29:26.876 sys 0m0.059s 00:29:26.876 05:11:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:26.876 ************************************ 00:29:26.876 END TEST dd_invalid_count 00:29:26.876 05:11:56 -- common/autotest_common.sh@10 -- # set +x 00:29:26.876 ************************************ 00:29:26.876 05:11:56 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:29:26.876 05:11:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:26.876 05:11:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:26.876 05:11:56 -- common/autotest_common.sh@10 -- # set +x 00:29:26.876 ************************************ 00:29:26.876 START TEST dd_invalid_oflag 00:29:26.876 ************************************ 00:29:26.876 05:11:56 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:29:26.876 05:11:56 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:26.876 05:11:56 -- common/autotest_common.sh@640 -- # local es=0 00:29:26.876 05:11:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:26.876 05:11:56 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:26.876 05:11:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:26.876 05:11:56 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:26.876 05:11:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:26.876 05:11:56 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:26.876 05:11:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:26.876 05:11:56 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:26.876 05:11:56 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:26.876 05:11:56 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:26.876 [2024-04-27 05:11:56.700731] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:29:26.876 05:11:56 -- common/autotest_common.sh@643 -- # es=22 00:29:26.876 05:11:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:26.876 05:11:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:26.876 
05:11:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:26.876 00:29:26.876 real 0m0.112s 00:29:26.876 user 0m0.061s 00:29:26.876 sys 0m0.051s 00:29:26.876 05:11:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:26.876 05:11:56 -- common/autotest_common.sh@10 -- # set +x 00:29:26.876 ************************************ 00:29:26.876 END TEST dd_invalid_oflag 00:29:26.876 ************************************ 00:29:26.876 05:11:56 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:29:26.876 05:11:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:26.876 05:11:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:26.876 05:11:56 -- common/autotest_common.sh@10 -- # set +x 00:29:27.141 ************************************ 00:29:27.141 START TEST dd_invalid_iflag 00:29:27.141 ************************************ 00:29:27.141 05:11:56 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:29:27.141 05:11:56 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:27.141 05:11:56 -- common/autotest_common.sh@640 -- # local es=0 00:29:27.141 05:11:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:27.141 05:11:56 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:27.141 05:11:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:27.141 05:11:56 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:27.141 05:11:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:27.141 05:11:56 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:27.141 05:11:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:27.141 05:11:56 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:27.141 05:11:56 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:27.141 05:11:56 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:27.141 [2024-04-27 05:11:56.864508] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:29:27.141 05:11:56 -- common/autotest_common.sh@643 -- # es=22 00:29:27.141 05:11:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:27.141 05:11:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:27.141 05:11:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:27.141 00:29:27.141 real 0m0.109s 00:29:27.141 user 0m0.055s 00:29:27.141 sys 0m0.054s 00:29:27.141 05:11:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:27.141 ************************************ 00:29:27.141 END TEST dd_invalid_iflag 00:29:27.141 05:11:56 -- common/autotest_common.sh@10 -- # set +x 00:29:27.141 ************************************ 00:29:27.141 05:11:56 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:29:27.141 05:11:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:27.141 05:11:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:27.141 05:11:56 -- common/autotest_common.sh@10 -- # set +x 00:29:27.141 ************************************ 00:29:27.141 START TEST dd_unknown_flag 00:29:27.141 ************************************ 00:29:27.141 05:11:56 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:29:27.141 05:11:56 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:27.141 05:11:56 -- common/autotest_common.sh@640 -- # local es=0 00:29:27.141 05:11:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:27.141 05:11:56 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:27.141 05:11:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:27.141 05:11:56 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:27.141 05:11:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:27.141 05:11:56 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:27.141 05:11:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:27.141 05:11:56 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:27.141 05:11:56 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:27.141 05:11:56 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:27.141 [2024-04-27 05:11:57.029465] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:27.141 [2024-04-27 05:11:57.029755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148362 ] 00:29:27.399 [2024-04-27 05:11:57.200133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.399 [2024-04-27 05:11:57.304597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.658 [2024-04-27 05:11:57.427660] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:29:27.658 [2024-04-27 05:11:57.427812] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:29:27.658 [2024-04-27 05:11:57.427853] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:29:27.658 [2024-04-27 05:11:57.427920] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:27.916 [2024-04-27 05:11:57.630702] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:27.916 05:11:57 -- common/autotest_common.sh@643 -- # es=236 00:29:27.916 05:11:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:27.917 05:11:57 -- common/autotest_common.sh@652 -- # es=108 00:29:27.917 05:11:57 -- common/autotest_common.sh@653 -- # case "$es" in 00:29:27.917 05:11:57 -- common/autotest_common.sh@660 -- # es=1 00:29:27.917 05:11:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:27.917 00:29:27.917 real 0m0.821s 00:29:27.917 user 0m0.468s 00:29:27.917 sys 0m0.253s 00:29:27.917 ************************************ 00:29:27.917 END TEST dd_unknown_flag 00:29:27.917 ************************************ 00:29:27.917 05:11:57 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:29:27.917 05:11:57 -- common/autotest_common.sh@10 -- # set +x 00:29:27.917 05:11:57 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:29:27.917 05:11:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:27.917 05:11:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:27.917 05:11:57 -- common/autotest_common.sh@10 -- # set +x 00:29:28.175 ************************************ 00:29:28.176 START TEST dd_invalid_json 00:29:28.176 ************************************ 00:29:28.176 05:11:57 -- common/autotest_common.sh@1104 -- # invalid_json 00:29:28.176 05:11:57 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:29:28.176 05:11:57 -- common/autotest_common.sh@640 -- # local es=0 00:29:28.176 05:11:57 -- dd/negative_dd.sh@95 -- # : 00:29:28.176 05:11:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:29:28.176 05:11:57 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:28.176 05:11:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:28.176 05:11:57 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:28.176 05:11:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:28.176 05:11:57 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:28.176 05:11:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:28.176 05:11:57 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:28.176 05:11:57 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:28.176 05:11:57 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:29:28.176 [2024-04-27 05:11:57.897973] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
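The invalid-JSON case starting here differs from the flag checks above: a non-JSON payload is fed to spdk_dd's --json option through a process-substitution file descriptor (seen as /dev/fd/62 in the trace), so the rejection comes from the JSON config reader rather than the option parser. A rough standalone equivalent with an illustrative payload (the harness's actual payload is not visible in this log):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
        --json <(echo 'not json') 2>&1 |
        grep -q 'Parsing JSON configuration failed'    # the expected *ERROR* line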
00:29:28.176 [2024-04-27 05:11:57.898180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148399 ] 00:29:28.176 [2024-04-27 05:11:58.054372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.434 [2024-04-27 05:11:58.134097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.434 [2024-04-27 05:11:58.134414] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:29:28.434 [2024-04-27 05:11:58.134490] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:28.434 [2024-04-27 05:11:58.134611] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:28.434 05:11:58 -- common/autotest_common.sh@643 -- # es=234 00:29:28.434 05:11:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:28.434 05:11:58 -- common/autotest_common.sh@652 -- # es=106 00:29:28.434 05:11:58 -- common/autotest_common.sh@653 -- # case "$es" in 00:29:28.434 05:11:58 -- common/autotest_common.sh@660 -- # es=1 00:29:28.434 05:11:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:28.434 00:29:28.434 real 0m0.415s 00:29:28.434 user 0m0.193s 00:29:28.434 sys 0m0.124s 00:29:28.434 ************************************ 00:29:28.434 END TEST dd_invalid_json 00:29:28.434 ************************************ 00:29:28.434 05:11:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:28.434 05:11:58 -- common/autotest_common.sh@10 -- # set +x 00:29:28.434 ************************************ 00:29:28.434 END TEST spdk_dd_negative 00:29:28.434 ************************************ 00:29:28.434 00:29:28.434 real 0m3.871s 00:29:28.434 user 0m2.103s 00:29:28.434 sys 0m1.458s 00:29:28.434 05:11:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:28.434 05:11:58 -- common/autotest_common.sh@10 -- # set +x 00:29:28.434 00:29:28.434 real 1m28.385s 00:29:28.434 user 0m52.769s 00:29:28.434 sys 0m25.108s 00:29:28.434 05:11:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:28.434 ************************************ 00:29:28.434 05:11:58 -- common/autotest_common.sh@10 -- # set +x 00:29:28.434 END TEST spdk_dd 00:29:28.434 ************************************ 00:29:28.692 05:11:58 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:29:28.692 05:11:58 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:29:28.692 05:11:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:28.692 05:11:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:28.692 05:11:58 -- common/autotest_common.sh@10 -- # set +x 00:29:28.692 ************************************ 00:29:28.692 START TEST blockdev_nvme 00:29:28.692 ************************************ 00:29:28.692 05:11:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:29:28.692 * Looking for test storage... 
00:29:28.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:28.692 05:11:58 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:28.692 05:11:58 -- bdev/nbd_common.sh@6 -- # set -e 00:29:28.692 05:11:58 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:28.692 05:11:58 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:28.692 05:11:58 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:28.692 05:11:58 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:28.692 05:11:58 -- bdev/blockdev.sh@18 -- # : 00:29:28.692 05:11:58 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:29:28.692 05:11:58 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:29:28.692 05:11:58 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:29:28.692 05:11:58 -- bdev/blockdev.sh@672 -- # uname -s 00:29:28.692 05:11:58 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:29:28.692 05:11:58 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:29:28.692 05:11:58 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:29:28.692 05:11:58 -- bdev/blockdev.sh@681 -- # crypto_device= 00:29:28.692 05:11:58 -- bdev/blockdev.sh@682 -- # dek= 00:29:28.692 05:11:58 -- bdev/blockdev.sh@683 -- # env_ctx= 00:29:28.692 05:11:58 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:29:28.692 05:11:58 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:29:28.692 05:11:58 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:29:28.692 05:11:58 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:29:28.692 05:11:58 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:29:28.692 05:11:58 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=148494 00:29:28.692 05:11:58 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:28.692 05:11:58 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:28.692 05:11:58 -- bdev/blockdev.sh@47 -- # waitforlisten 148494 00:29:28.692 05:11:58 -- common/autotest_common.sh@819 -- # '[' -z 148494 ']' 00:29:28.692 05:11:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.692 05:11:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:28.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.692 05:11:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.692 05:11:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:28.692 05:11:58 -- common/autotest_common.sh@10 -- # set +x 00:29:28.692 [2024-04-27 05:11:58.539090] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
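Once spdk_tgt is up and waitforlisten sees its RPC socket, the test injects the NVMe configuration over RPC: gen_nvme.sh emits a bdev_nvme_attach_controller entry for the PCIe device and load_subsystem_config applies it (the JSON appears verbatim below). An equivalent manual sequence, as a sketch only, using the rpc.py path and PCI address from this log rather than the harness's rpc_cmd helper:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # attach the QEMU NVMe controller at 0000:00:06.0 as bdev "Nvme0"
    $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
    $RPC bdev_wait_for_examine
    $RPC bdev_get_bdevs            # should now report Nvme0n1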
00:29:28.692 [2024-04-27 05:11:58.539379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148494 ] 00:29:28.951 [2024-04-27 05:11:58.709049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.209 [2024-04-27 05:11:58.876810] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:29.209 [2024-04-27 05:11:58.877160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.777 05:11:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:29.777 05:11:59 -- common/autotest_common.sh@852 -- # return 0 00:29:29.777 05:11:59 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:29:29.777 05:11:59 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:29:29.777 05:11:59 -- bdev/blockdev.sh@79 -- # local json 00:29:29.777 05:11:59 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:29:29.777 05:11:59 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:29.777 05:11:59 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:29:29.777 05:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.777 05:11:59 -- common/autotest_common.sh@10 -- # set +x 00:29:29.777 05:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.777 05:11:59 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:29:29.777 05:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.777 05:11:59 -- common/autotest_common.sh@10 -- # set +x 00:29:29.777 05:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.777 05:11:59 -- bdev/blockdev.sh@738 -- # cat 00:29:29.777 05:11:59 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:29:29.777 05:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.777 05:11:59 -- common/autotest_common.sh@10 -- # set +x 00:29:29.777 05:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.777 05:11:59 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:29:29.777 05:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.777 05:11:59 -- common/autotest_common.sh@10 -- # set +x 00:29:29.777 05:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.777 05:11:59 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:29.777 05:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.777 05:11:59 -- common/autotest_common.sh@10 -- # set +x 00:29:29.777 05:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.777 05:11:59 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:29:29.777 05:11:59 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:29:29.777 05:11:59 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:29:29.777 05:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.777 05:11:59 -- common/autotest_common.sh@10 -- # set +x 00:29:29.777 05:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.777 05:11:59 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:29:29.777 05:11:59 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "179f1e74-b0b0-42bd-b728-70b51c5577c4"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "179f1e74-b0b0-42bd-b728-70b51c5577c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:29:29.777 05:11:59 -- bdev/blockdev.sh@747 -- # jq -r .name 00:29:30.036 05:11:59 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:29:30.036 05:11:59 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:29:30.036 05:11:59 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:29:30.036 05:11:59 -- bdev/blockdev.sh@752 -- # killprocess 148494 00:29:30.036 05:11:59 -- common/autotest_common.sh@926 -- # '[' -z 148494 ']' 00:29:30.036 05:11:59 -- common/autotest_common.sh@930 -- # kill -0 148494 00:29:30.036 05:11:59 -- common/autotest_common.sh@931 -- # uname 00:29:30.036 05:11:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:30.036 05:11:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 148494 00:29:30.036 05:11:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:30.036 05:11:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:30.036 killing process with pid 148494 00:29:30.036 05:11:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 148494' 00:29:30.036 05:11:59 -- common/autotest_common.sh@945 -- # kill 148494 00:29:30.036 05:11:59 -- common/autotest_common.sh@950 -- # wait 148494 00:29:30.603 05:12:00 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:30.603 05:12:00 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:30.603 05:12:00 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:30.603 05:12:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:30.603 05:12:00 -- common/autotest_common.sh@10 -- # set +x 00:29:30.603 ************************************ 00:29:30.603 START TEST bdev_hello_world 00:29:30.603 ************************************ 00:29:30.603 05:12:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:30.603 [2024-04-27 05:12:00.450544] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:29:30.603 [2024-04-27 05:12:00.450966] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148562 ] 00:29:30.861 [2024-04-27 05:12:00.619002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.862 [2024-04-27 05:12:00.693400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.121 [2024-04-27 05:12:00.941818] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:31.121 [2024-04-27 05:12:00.941911] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:29:31.121 [2024-04-27 05:12:00.941977] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:31.121 [2024-04-27 05:12:00.944535] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:31.121 [2024-04-27 05:12:00.945113] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:31.121 [2024-04-27 05:12:00.945189] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:31.121 [2024-04-27 05:12:00.945472] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:31.121 00:29:31.121 [2024-04-27 05:12:00.945536] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:31.380 00:29:31.380 real 0m0.891s 00:29:31.380 user 0m0.515s 00:29:31.380 sys 0m0.276s 00:29:31.380 05:12:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:31.380 05:12:01 -- common/autotest_common.sh@10 -- # set +x 00:29:31.380 ************************************ 00:29:31.380 END TEST bdev_hello_world 00:29:31.380 ************************************ 00:29:31.639 05:12:01 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:29:31.639 05:12:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:31.639 05:12:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:31.639 05:12:01 -- common/autotest_common.sh@10 -- # set +x 00:29:31.639 ************************************ 00:29:31.639 START TEST bdev_bounds 00:29:31.639 ************************************ 00:29:31.639 05:12:01 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:29:31.639 05:12:01 -- bdev/blockdev.sh@288 -- # bdevio_pid=148591 00:29:31.639 05:12:01 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:31.639 05:12:01 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:31.639 Process bdevio pid: 148591 00:29:31.639 05:12:01 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 148591' 00:29:31.639 05:12:01 -- bdev/blockdev.sh@291 -- # waitforlisten 148591 00:29:31.639 05:12:01 -- common/autotest_common.sh@819 -- # '[' -z 148591 ']' 00:29:31.639 05:12:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.639 05:12:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:31.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.639 05:12:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
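The bdev_bounds stage starting here drives bdevio rather than a shell loop: bdevio is launched in wait mode (-w) against the same bdev.json, and the CUnit suite below is then triggered over RPC by tests.py perform_tests. Reduced to the two commands involved (paths as in this log; a sketch of the flow, not the harness's exact sequencing):

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    # once its RPC socket is up, kick off the registered CUnit tests
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests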
00:29:31.639 05:12:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:31.639 05:12:01 -- common/autotest_common.sh@10 -- # set +x 00:29:31.639 [2024-04-27 05:12:01.389456] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:31.639 [2024-04-27 05:12:01.389693] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148591 ] 00:29:31.898 [2024-04-27 05:12:01.566695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:31.898 [2024-04-27 05:12:01.651145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.898 [2024-04-27 05:12:01.651271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.898 [2024-04-27 05:12:01.651269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:32.465 05:12:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:32.465 05:12:02 -- common/autotest_common.sh@852 -- # return 0 00:29:32.465 05:12:02 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:32.725 I/O targets: 00:29:32.725 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:29:32.725 00:29:32.725 00:29:32.725 CUnit - A unit testing framework for C - Version 2.1-3 00:29:32.725 http://cunit.sourceforge.net/ 00:29:32.725 00:29:32.725 00:29:32.725 Suite: bdevio tests on: Nvme0n1 00:29:32.725 Test: blockdev write read block ...passed 00:29:32.725 Test: blockdev write zeroes read block ...passed 00:29:32.725 Test: blockdev write zeroes read no split ...passed 00:29:32.725 Test: blockdev write zeroes read split ...passed 00:29:32.725 Test: blockdev write zeroes read split partial ...passed 00:29:32.725 Test: blockdev reset ...[2024-04-27 05:12:02.463969] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:32.725 [2024-04-27 05:12:02.466369] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:32.725 passed 00:29:32.725 Test: blockdev write read 8 blocks ...passed 00:29:32.725 Test: blockdev write read size > 128k ...passed 00:29:32.725 Test: blockdev write read invalid size ...passed 00:29:32.725 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:32.725 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:32.725 Test: blockdev write read max offset ...passed 00:29:32.725 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:32.725 Test: blockdev writev readv 8 blocks ...passed 00:29:32.725 Test: blockdev writev readv 30 x 1block ...passed 00:29:32.725 Test: blockdev writev readv block ...passed 00:29:32.725 Test: blockdev writev readv size > 128k ...passed 00:29:32.725 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:32.725 Test: blockdev comparev and writev ...[2024-04-27 05:12:02.477181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x7a20d000 len:0x1000 00:29:32.725 [2024-04-27 05:12:02.477299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:32.725 passed 00:29:32.725 Test: blockdev nvme passthru rw ...passed 00:29:32.725 Test: blockdev nvme passthru vendor specific ...[2024-04-27 05:12:02.478387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:32.725 [2024-04-27 05:12:02.478470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:32.725 passed 00:29:32.725 Test: blockdev nvme admin passthru ...passed 00:29:32.725 Test: blockdev copy ...passed 00:29:32.725 00:29:32.725 Run Summary: Type Total Ran Passed Failed Inactive 00:29:32.725 suites 1 1 n/a 0 0 00:29:32.725 tests 23 23 23 0 0 00:29:32.725 asserts 152 152 152 0 n/a 00:29:32.725 00:29:32.725 Elapsed time = 0.087 seconds 00:29:32.725 0 00:29:32.725 05:12:02 -- bdev/blockdev.sh@293 -- # killprocess 148591 00:29:32.725 05:12:02 -- common/autotest_common.sh@926 -- # '[' -z 148591 ']' 00:29:32.725 05:12:02 -- common/autotest_common.sh@930 -- # kill -0 148591 00:29:32.725 05:12:02 -- common/autotest_common.sh@931 -- # uname 00:29:32.725 05:12:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:32.725 05:12:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 148591 00:29:32.725 05:12:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:32.725 05:12:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:32.725 killing process with pid 148591 00:29:32.725 05:12:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 148591' 00:29:32.725 05:12:02 -- common/autotest_common.sh@945 -- # kill 148591 00:29:32.725 05:12:02 -- common/autotest_common.sh@950 -- # wait 148591 00:29:32.984 05:12:02 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:29:32.984 00:29:32.984 real 0m1.529s 00:29:32.984 user 0m3.731s 00:29:32.984 sys 0m0.398s 00:29:32.984 05:12:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:32.984 05:12:02 -- common/autotest_common.sh@10 -- # set +x 00:29:32.984 ************************************ 00:29:32.984 END TEST bdev_bounds 00:29:32.984 ************************************ 00:29:33.242 05:12:02 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
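nbd_function_test, which starts next, exercises the bdev through the kernel NBD layer: a bdev_svc app is brought up on its own RPC socket, the bdev is exported as /dev/nbd0 with nbd_start_disk, and ordinary block tools (dd, cmp, and later mkfs.ext4 on an lvol) are run against it. The data-verify core, reduced to a standalone sketch (socket and rpc.py path as in this log; the temp-file name is illustrative):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC nbd_start_disk Nvme0n1 /dev/nbd0
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256        # 1 MiB pattern
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                         # read back and compare
    $RPC nbd_stop_disk /dev/nbd0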
00:29:33.242 05:12:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:29:33.242 05:12:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:33.242 05:12:02 -- common/autotest_common.sh@10 -- # set +x 00:29:33.242 ************************************ 00:29:33.242 START TEST bdev_nbd 00:29:33.242 ************************************ 00:29:33.242 05:12:02 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:29:33.242 05:12:02 -- bdev/blockdev.sh@298 -- # uname -s 00:29:33.242 05:12:02 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:29:33.242 05:12:02 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:33.242 05:12:02 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:33.242 05:12:02 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1') 00:29:33.242 05:12:02 -- bdev/blockdev.sh@302 -- # local bdev_all 00:29:33.242 05:12:02 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:29:33.242 05:12:02 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:29:33.242 05:12:02 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:29:33.242 05:12:02 -- bdev/blockdev.sh@309 -- # local nbd_all 00:29:33.242 05:12:02 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:29:33.242 05:12:02 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:29:33.242 05:12:02 -- bdev/blockdev.sh@312 -- # local nbd_list 00:29:33.242 05:12:02 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1') 00:29:33.242 05:12:02 -- bdev/blockdev.sh@313 -- # local bdev_list 00:29:33.242 05:12:02 -- bdev/blockdev.sh@316 -- # nbd_pid=148648 00:29:33.242 05:12:02 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:33.242 05:12:02 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:33.242 05:12:02 -- bdev/blockdev.sh@318 -- # waitforlisten 148648 /var/tmp/spdk-nbd.sock 00:29:33.242 05:12:02 -- common/autotest_common.sh@819 -- # '[' -z 148648 ']' 00:29:33.242 05:12:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:33.242 05:12:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:33.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:33.242 05:12:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:33.242 05:12:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:33.242 05:12:02 -- common/autotest_common.sh@10 -- # set +x 00:29:33.242 [2024-04-27 05:12:02.984338] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:29:33.242 [2024-04-27 05:12:02.984543] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.242 [2024-04-27 05:12:03.141241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.500 [2024-04-27 05:12:03.229657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.067 05:12:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:34.067 05:12:03 -- common/autotest_common.sh@852 -- # return 0 00:29:34.067 05:12:03 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:29:34.067 05:12:03 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:34.067 05:12:03 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:29:34.067 05:12:03 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:34.067 05:12:03 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:29:34.067 05:12:03 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:34.067 05:12:03 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:29:34.067 05:12:03 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:34.067 05:12:03 -- bdev/nbd_common.sh@24 -- # local i 00:29:34.067 05:12:03 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:34.067 05:12:03 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:34.067 05:12:03 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:34.067 05:12:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:29:34.325 05:12:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:34.325 05:12:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:34.325 05:12:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:34.325 05:12:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:29:34.325 05:12:04 -- common/autotest_common.sh@857 -- # local i 00:29:34.325 05:12:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:29:34.325 05:12:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:29:34.325 05:12:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:29:34.325 05:12:04 -- common/autotest_common.sh@861 -- # break 00:29:34.325 05:12:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:29:34.325 05:12:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:29:34.325 05:12:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:34.325 1+0 records in 00:29:34.325 1+0 records out 00:29:34.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101088 s, 4.1 MB/s 00:29:34.325 05:12:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:34.325 05:12:04 -- common/autotest_common.sh@874 -- # size=4096 00:29:34.325 05:12:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:34.325 05:12:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:29:34.325 05:12:04 -- common/autotest_common.sh@877 -- # return 0 00:29:34.325 05:12:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:34.325 05:12:04 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:34.325 05:12:04 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:34.583 05:12:04 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:34.583 { 00:29:34.583 "nbd_device": "/dev/nbd0", 00:29:34.583 "bdev_name": "Nvme0n1" 00:29:34.583 } 00:29:34.583 ]' 00:29:34.583 05:12:04 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:34.583 05:12:04 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:34.583 05:12:04 -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:34.583 { 00:29:34.583 "nbd_device": "/dev/nbd0", 00:29:34.583 "bdev_name": "Nvme0n1" 00:29:34.583 } 00:29:34.583 ]' 00:29:34.583 05:12:04 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:34.583 05:12:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:34.583 05:12:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:34.583 05:12:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:34.583 05:12:04 -- bdev/nbd_common.sh@51 -- # local i 00:29:34.583 05:12:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:34.583 05:12:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:34.852 05:12:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:34.852 05:12:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:34.852 05:12:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:34.852 05:12:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:34.852 05:12:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:34.852 05:12:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:34.852 05:12:04 -- bdev/nbd_common.sh@41 -- # break 00:29:34.852 05:12:04 -- bdev/nbd_common.sh@45 -- # return 0 00:29:34.852 05:12:04 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:34.852 05:12:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:34.852 05:12:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@65 -- # true 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@65 -- # count=0 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@122 -- # count=0 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@127 -- # return 0 00:29:35.124 05:12:04 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@10 
-- # bdev_list=('Nvme0n1') 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@12 -- # local i 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:35.124 05:12:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:29:35.382 /dev/nbd0 00:29:35.382 05:12:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:35.382 05:12:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:35.382 05:12:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:29:35.382 05:12:05 -- common/autotest_common.sh@857 -- # local i 00:29:35.382 05:12:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:29:35.382 05:12:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:29:35.382 05:12:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:29:35.382 05:12:05 -- common/autotest_common.sh@861 -- # break 00:29:35.382 05:12:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:29:35.383 05:12:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:29:35.383 05:12:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:35.383 1+0 records in 00:29:35.383 1+0 records out 00:29:35.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000994554 s, 4.1 MB/s 00:29:35.383 05:12:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:35.383 05:12:05 -- common/autotest_common.sh@874 -- # size=4096 00:29:35.383 05:12:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:35.383 05:12:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:29:35.383 05:12:05 -- common/autotest_common.sh@877 -- # return 0 00:29:35.383 05:12:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:35.383 05:12:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:35.383 05:12:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:35.383 05:12:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:35.383 05:12:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:35.642 { 00:29:35.642 "nbd_device": "/dev/nbd0", 00:29:35.642 "bdev_name": "Nvme0n1" 00:29:35.642 } 00:29:35.642 ]' 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:35.642 { 00:29:35.642 "nbd_device": "/dev/nbd0", 00:29:35.642 "bdev_name": "Nvme0n1" 00:29:35.642 } 00:29:35.642 ]' 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@65 -- # count=1 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@66 -- # echo 1 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@95 -- # count=1 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:29:35.642 05:12:05 -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:35.642 256+0 records in 00:29:35.642 256+0 records out 00:29:35.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.008546 s, 123 MB/s 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:35.642 256+0 records in 00:29:35.642 256+0 records out 00:29:35.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0707304 s, 14.8 MB/s 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@51 -- # local i 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:35.642 05:12:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:35.900 05:12:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:35.900 05:12:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:35.900 05:12:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:35.900 05:12:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:35.900 05:12:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:35.900 05:12:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:35.900 05:12:05 -- bdev/nbd_common.sh@41 -- # break 00:29:36.159 05:12:05 -- bdev/nbd_common.sh@45 -- # return 0 00:29:36.159 05:12:05 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:36.159 05:12:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:36.159 05:12:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:36.418 05:12:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:36.418 05:12:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:36.418 
05:12:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:36.418 05:12:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:36.418 05:12:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:36.418 05:12:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:36.418 05:12:06 -- bdev/nbd_common.sh@65 -- # true 00:29:36.418 05:12:06 -- bdev/nbd_common.sh@65 -- # count=0 00:29:36.418 05:12:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:36.418 05:12:06 -- bdev/nbd_common.sh@104 -- # count=0 00:29:36.418 05:12:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:36.418 05:12:06 -- bdev/nbd_common.sh@109 -- # return 0 00:29:36.418 05:12:06 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:36.418 05:12:06 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:36.418 05:12:06 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:29:36.418 05:12:06 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:29:36.418 05:12:06 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:29:36.418 05:12:06 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:36.676 malloc_lvol_verify 00:29:36.676 05:12:06 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:36.934 80a162a5-bc82-4e57-b2a8-a1718cdb07f3 00:29:36.934 05:12:06 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:36.934 8f32a396-e8f0-451c-8e07-3b1bd7b65b58 00:29:37.192 05:12:06 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:37.192 /dev/nbd0 00:29:37.192 05:12:07 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:29:37.192 mke2fs 1.46.5 (30-Dec-2021) 00:29:37.192 00:29:37.192 Filesystem too small for a journal 00:29:37.192 Discarding device blocks: 0/1024 done 00:29:37.192 Creating filesystem with 1024 4k blocks and 1024 inodes 00:29:37.192 00:29:37.192 Allocating group tables: 0/1 done 00:29:37.192 Writing inode tables: 0/1 done 00:29:37.192 Writing superblocks and filesystem accounting information: 0/1 done 00:29:37.192 00:29:37.192 05:12:07 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:29:37.192 05:12:07 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:37.192 05:12:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:37.192 05:12:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:37.450 05:12:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:37.450 05:12:07 -- bdev/nbd_common.sh@51 -- # local i 00:29:37.450 05:12:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:37.450 05:12:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:37.450 05:12:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:37.450 05:12:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:37.450 05:12:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:37.450 05:12:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:37.450 05:12:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:37.450 05:12:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:37.450 05:12:07 -- bdev/nbd_common.sh@41 -- # break 00:29:37.450 05:12:07 -- 
bdev/nbd_common.sh@45 -- # return 0 00:29:37.450 05:12:07 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:29:37.450 05:12:07 -- bdev/nbd_common.sh@147 -- # return 0 00:29:37.450 05:12:07 -- bdev/blockdev.sh@324 -- # killprocess 148648 00:29:37.450 05:12:07 -- common/autotest_common.sh@926 -- # '[' -z 148648 ']' 00:29:37.450 05:12:07 -- common/autotest_common.sh@930 -- # kill -0 148648 00:29:37.450 05:12:07 -- common/autotest_common.sh@931 -- # uname 00:29:37.450 05:12:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:37.450 05:12:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 148648 00:29:37.450 05:12:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:37.450 05:12:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:37.450 05:12:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 148648' 00:29:37.450 killing process with pid 148648 00:29:37.450 05:12:07 -- common/autotest_common.sh@945 -- # kill 148648 00:29:37.450 05:12:07 -- common/autotest_common.sh@950 -- # wait 148648 00:29:38.016 05:12:07 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:29:38.016 00:29:38.016 real 0m4.816s 00:29:38.016 user 0m7.291s 00:29:38.016 sys 0m1.055s 00:29:38.016 05:12:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:38.016 05:12:07 -- common/autotest_common.sh@10 -- # set +x 00:29:38.016 ************************************ 00:29:38.016 END TEST bdev_nbd 00:29:38.016 ************************************ 00:29:38.016 05:12:07 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:29:38.016 05:12:07 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:29:38.016 05:12:07 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:29:38.016 skipping fio tests on NVMe due to multi-ns failures. 00:29:38.016 05:12:07 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:38.016 05:12:07 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:38.016 05:12:07 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:29:38.016 05:12:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:38.016 05:12:07 -- common/autotest_common.sh@10 -- # set +x 00:29:38.016 ************************************ 00:29:38.016 START TEST bdev_verify 00:29:38.016 ************************************ 00:29:38.016 05:12:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:38.016 [2024-04-27 05:12:07.861097] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:38.016 [2024-04-27 05:12:07.861384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148835 ] 00:29:38.275 [2024-04-27 05:12:08.039311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:38.275 [2024-04-27 05:12:08.165878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.275 [2024-04-27 05:12:08.165880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.534 Running I/O for 5 seconds... 
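The verify pass starting here is the bdevperf example app pointed at the bdev.json generated earlier. Stripped of the run_test and xtrace wrapping, the command traced above comes down to the sketch below (paths are this job's workspace; -q sets the queue depth, -o the I/O size in bytes, -w the workload, -t the run time in seconds, -m the core mask; the big-I/O variant further down only changes -o to 65536):

  # 5-second verify workload, 4 KiB I/O, queue depth 128, two cores
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3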
00:29:43.800 00:29:43.800 Latency(us) 00:29:43.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.800 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:43.800 Verification LBA range: start 0x0 length 0xa0000 00:29:43.800 Nvme0n1 : 5.01 17599.75 68.75 0.00 0.00 7239.91 569.72 24069.59 00:29:43.800 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:43.800 Verification LBA range: start 0xa0000 length 0xa0000 00:29:43.800 Nvme0n1 : 5.01 17575.94 68.66 0.00 0.00 7250.03 562.27 22401.40 00:29:43.800 =================================================================================================================== 00:29:43.800 Total : 35175.68 137.41 0.00 0.00 7244.96 562.27 24069.59 00:29:51.969 00:29:51.969 real 0m13.755s 00:29:51.969 user 0m26.443s 00:29:51.969 sys 0m0.439s 00:29:51.969 05:12:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:51.969 05:12:21 -- common/autotest_common.sh@10 -- # set +x 00:29:51.969 ************************************ 00:29:51.969 END TEST bdev_verify 00:29:51.969 ************************************ 00:29:51.969 05:12:21 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:51.969 05:12:21 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:29:51.969 05:12:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:51.969 05:12:21 -- common/autotest_common.sh@10 -- # set +x 00:29:51.969 ************************************ 00:29:51.969 START TEST bdev_verify_big_io 00:29:51.969 ************************************ 00:29:51.969 05:12:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:51.969 [2024-04-27 05:12:21.663243] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:51.969 [2024-04-27 05:12:21.663459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148965 ] 00:29:51.969 [2024-04-27 05:12:21.825006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:52.228 [2024-04-27 05:12:21.936107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.229 [2024-04-27 05:12:21.936104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.488 Running I/O for 5 seconds... 
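A quick arithmetic check on the table above (my numbers, not part of the log): with 4096-byte I/O, MiB/s is simply IOPS divided by 256, so the two job rows work out to 17599.75 / 256 = 68.75 and 17575.94 / 256 = 68.66, matching the MiB/s column. The same relation with 65536-byte I/O (divide by 16) applies to the big-I/O table that follows. As a one-liner:

  awk 'BEGIN { printf "%.2f MiB/s\n", 17599.75 * 4096 / 1048576 }'   # prints 68.75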
00:29:57.755 00:29:57.755 Latency(us) 00:29:57.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.755 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:57.755 Verification LBA range: start 0x0 length 0xa000 00:29:57.755 Nvme0n1 : 5.03 1828.23 114.26 0.00 0.00 69042.65 476.63 104857.60 00:29:57.755 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:57.755 Verification LBA range: start 0xa000 length 0xa000 00:29:57.755 Nvme0n1 : 5.04 1767.21 110.45 0.00 0.00 71403.44 547.37 112483.61 00:29:57.755 =================================================================================================================== 00:29:57.755 Total : 3595.44 224.71 0.00 0.00 70203.48 476.63 112483.61 00:29:58.323 00:29:58.323 real 0m6.336s 00:29:58.323 user 0m11.810s 00:29:58.323 sys 0m0.280s 00:29:58.323 ************************************ 00:29:58.323 END TEST bdev_verify_big_io 00:29:58.323 ************************************ 00:29:58.323 05:12:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:58.323 05:12:27 -- common/autotest_common.sh@10 -- # set +x 00:29:58.323 05:12:27 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:58.323 05:12:27 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:29:58.323 05:12:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:58.323 05:12:27 -- common/autotest_common.sh@10 -- # set +x 00:29:58.323 ************************************ 00:29:58.323 START TEST bdev_write_zeroes 00:29:58.323 ************************************ 00:29:58.323 05:12:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:58.323 [2024-04-27 05:12:28.066740] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:58.323 [2024-04-27 05:12:28.066987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149053 ] 00:29:58.323 [2024-04-27 05:12:28.236701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.581 [2024-04-27 05:12:28.359160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.839 Running I/O for 1 seconds... 
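The write_zeroes pass that begins here reuses the same bdevperf binary, only swapping the workload and shortening the run; minus the wrapper it is roughly the following (single core this time, hence the single job row in the table that follows):

  # 1-second write_zeroes workload, 4 KiB I/O, queue depth 128
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1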
00:29:59.773 00:29:59.773 Latency(us) 00:29:59.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.773 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:59.773 Nvme0n1 : 1.01 49247.72 192.37 0.00 0.00 2588.31 848.99 17992.61 00:29:59.773 =================================================================================================================== 00:29:59.773 Total : 49247.72 192.37 0.00 0.00 2588.31 848.99 17992.61 00:30:00.340 00:30:00.340 real 0m1.986s 00:30:00.340 user 0m1.614s 00:30:00.340 sys 0m0.272s 00:30:00.340 05:12:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:00.340 05:12:29 -- common/autotest_common.sh@10 -- # set +x 00:30:00.340 ************************************ 00:30:00.340 END TEST bdev_write_zeroes 00:30:00.340 ************************************ 00:30:00.340 05:12:30 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:00.340 05:12:30 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:00.340 05:12:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:00.340 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:30:00.340 ************************************ 00:30:00.340 START TEST bdev_json_nonenclosed 00:30:00.340 ************************************ 00:30:00.340 05:12:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:00.340 [2024-04-27 05:12:30.111714] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:30:00.340 [2024-04-27 05:12:30.111981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149103 ] 00:30:00.597 [2024-04-27 05:12:30.281257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.597 [2024-04-27 05:12:30.382141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.597 [2024-04-27 05:12:30.382395] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:30:00.597 [2024-04-27 05:12:30.382456] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:00.597 00:30:00.597 real 0m0.463s 00:30:00.597 user 0m0.207s 00:30:00.597 sys 0m0.156s 00:30:00.597 05:12:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:00.597 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:30:00.597 ************************************ 00:30:00.598 END TEST bdev_json_nonenclosed 00:30:00.598 ************************************ 00:30:00.856 05:12:30 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:00.856 05:12:30 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:00.856 05:12:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:00.856 05:12:30 -- common/autotest_common.sh@10 -- # set +x 00:30:00.856 ************************************ 00:30:00.856 START TEST bdev_json_nonarray 00:30:00.856 ************************************ 00:30:00.856 05:12:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:00.856 [2024-04-27 05:12:30.628870] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:30:00.856 [2024-04-27 05:12:30.629173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149136 ] 00:30:01.115 [2024-04-27 05:12:30.800759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.115 [2024-04-27 05:12:30.924551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.115 [2024-04-27 05:12:30.924855] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
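The two negative JSON tests above feed deliberately malformed configs to bdevperf and expect exactly the two errors just logged: a config whose top level is not wrapped in {} and one whose "subsystems" key is not an array. The contents of nonenclosed.json and nonarray.json are not shown in this log, so treat the shape below as an assumption; for contrast, a well-formed config using the same attach-controller entry this job uses elsewhere looks like:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" } }
        ]
      }
    ]
  }

Dropping the outer braces reproduces the "not enclosed in {}" error; turning "subsystems" into a single object instead of a list reproduces "'subsystems' should be an array".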
00:30:01.115 [2024-04-27 05:12:30.924955] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:01.386 00:30:01.386 real 0m0.532s 00:30:01.386 user 0m0.268s 00:30:01.386 sys 0m0.164s 00:30:01.386 05:12:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:01.386 05:12:31 -- common/autotest_common.sh@10 -- # set +x 00:30:01.386 ************************************ 00:30:01.386 END TEST bdev_json_nonarray 00:30:01.386 ************************************ 00:30:01.386 05:12:31 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:30:01.386 05:12:31 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:30:01.386 05:12:31 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:30:01.386 05:12:31 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:30:01.386 05:12:31 -- bdev/blockdev.sh@809 -- # cleanup 00:30:01.386 05:12:31 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:01.386 05:12:31 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:01.386 05:12:31 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:30:01.386 05:12:31 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:30:01.386 05:12:31 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:30:01.386 05:12:31 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:30:01.386 ************************************ 00:30:01.386 END TEST blockdev_nvme 00:30:01.386 ************************************ 00:30:01.386 00:30:01.386 real 0m32.779s 00:30:01.386 user 0m54.051s 00:30:01.386 sys 0m3.901s 00:30:01.386 05:12:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:01.386 05:12:31 -- common/autotest_common.sh@10 -- # set +x 00:30:01.386 05:12:31 -- spdk/autotest.sh@219 -- # uname -s 00:30:01.386 05:12:31 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:30:01.386 05:12:31 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:30:01.386 05:12:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:01.386 05:12:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:01.386 05:12:31 -- common/autotest_common.sh@10 -- # set +x 00:30:01.386 ************************************ 00:30:01.386 START TEST blockdev_nvme_gpt 00:30:01.386 ************************************ 00:30:01.386 05:12:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:30:01.386 * Looking for test storage... 
00:30:01.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:30:01.386 05:12:31 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:30:01.386 05:12:31 -- bdev/nbd_common.sh@6 -- # set -e 00:30:01.386 05:12:31 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:30:01.386 05:12:31 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:01.386 05:12:31 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:30:01.386 05:12:31 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:30:01.386 05:12:31 -- bdev/blockdev.sh@18 -- # : 00:30:01.386 05:12:31 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:30:01.386 05:12:31 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:30:01.386 05:12:31 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:30:01.386 05:12:31 -- bdev/blockdev.sh@672 -- # uname -s 00:30:01.386 05:12:31 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:30:01.386 05:12:31 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:30:01.386 05:12:31 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:30:01.386 05:12:31 -- bdev/blockdev.sh@681 -- # crypto_device= 00:30:01.386 05:12:31 -- bdev/blockdev.sh@682 -- # dek= 00:30:01.386 05:12:31 -- bdev/blockdev.sh@683 -- # env_ctx= 00:30:01.386 05:12:31 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:30:01.386 05:12:31 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:30:01.386 05:12:31 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:30:01.386 05:12:31 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:30:01.386 05:12:31 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:30:01.386 05:12:31 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=149210 00:30:01.386 05:12:31 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:01.386 05:12:31 -- bdev/blockdev.sh@47 -- # waitforlisten 149210 00:30:01.386 05:12:31 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:01.386 05:12:31 -- common/autotest_common.sh@819 -- # '[' -z 149210 ']' 00:30:01.656 05:12:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.656 05:12:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:01.656 05:12:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.656 05:12:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:01.656 05:12:31 -- common/autotest_common.sh@10 -- # set +x 00:30:01.656 [2024-04-27 05:12:31.374669] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
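For the gpt variant the harness launches a long-lived spdk_tgt (started just above) and only proceeds once its RPC socket answers. The script uses the waitforlisten helper for that; the polling loop below is a minimal stand-in for the helper, not what the script literally runs:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  tgt_pid=$!
  # block until the default RPC socket responds
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  # ... run the gpt setup and tests, then tear down:
  kill "$tgt_pid"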
00:30:01.656 [2024-04-27 05:12:31.374956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149210 ] 00:30:01.656 [2024-04-27 05:12:31.545647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.915 [2024-04-27 05:12:31.667441] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:01.915 [2024-04-27 05:12:31.667744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.481 05:12:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:02.481 05:12:32 -- common/autotest_common.sh@852 -- # return 0 00:30:02.481 05:12:32 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:30:02.481 05:12:32 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:30:02.482 05:12:32 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:02.740 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:02.740 Waiting for block devices as requested 00:30:02.999 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:02.999 05:12:32 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:30:02.999 05:12:32 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:30:02.999 05:12:32 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:30:02.999 05:12:32 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:30:02.999 05:12:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:30:02.999 05:12:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:30:02.999 05:12:32 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:30:02.999 05:12:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:02.999 05:12:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:30:02.999 05:12:32 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme0/nvme0n1') 00:30:02.999 05:12:32 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:30:02.999 05:12:32 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:30:02.999 05:12:32 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:30:02.999 05:12:32 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:30:02.999 05:12:32 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:30:02.999 05:12:32 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:30:02.999 05:12:32 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:30:02.999 BYT; 00:30:03.000 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:30:03.000 05:12:32 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:30:03.000 BYT; 00:30:03.000 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:30:03.000 05:12:32 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:30:03.000 05:12:32 -- bdev/blockdev.sh@114 -- # break 00:30:03.000 05:12:32 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:30:03.000 05:12:32 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:30:03.000 05:12:32 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:30:03.000 05:12:32 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart 
SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:30:03.257 05:12:33 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:30:03.257 05:12:33 -- scripts/common.sh@410 -- # local spdk_guid 00:30:03.257 05:12:33 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:30:03.258 05:12:33 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:30:03.258 05:12:33 -- scripts/common.sh@415 -- # IFS='()' 00:30:03.258 05:12:33 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:30:03.258 05:12:33 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:30:03.258 05:12:33 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:30:03.258 05:12:33 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:30:03.258 05:12:33 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:30:03.258 05:12:33 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:30:03.258 05:12:33 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:30:03.258 05:12:33 -- scripts/common.sh@422 -- # local spdk_guid 00:30:03.258 05:12:33 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:30:03.258 05:12:33 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:30:03.258 05:12:33 -- scripts/common.sh@427 -- # IFS='()' 00:30:03.258 05:12:33 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:30:03.258 05:12:33 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:30:03.258 05:12:33 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:30:03.258 05:12:33 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:30:03.258 05:12:33 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:30:03.258 05:12:33 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:30:03.258 05:12:33 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:30:04.634 The operation has completed successfully. 00:30:04.634 05:12:34 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:30:05.569 The operation has completed successfully. 
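Putting the partitioning steps traced above in one place: the disk gets a fresh GPT label with two half-size partitions, then sgdisk stamps partition 1 with the current SPDK partition-type GUID and partition 2 with the old one (both GUIDs are read out of module/bdev/gpt/gpt.h by the script), plus fixed unique GUIDs, so the GPT bdev module can expose them as Nvme0n1p1 and Nvme0n1p2:

  parted -s /dev/nvme0n1 mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% \
      mkpart SPDK_TEST_second 50% 100%
  # partition 1: SPDK_GPT_PART_TYPE_GUID, partition 2: SPDK_GPT_PART_TYPE_GUID_OLD
  sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
  sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1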
00:30:05.569 05:12:35 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:05.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:05.828 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:30:07.204 05:12:36 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:30:07.204 05:12:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:07.204 05:12:36 -- common/autotest_common.sh@10 -- # set +x 00:30:07.204 [] 00:30:07.204 05:12:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:07.204 05:12:36 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:30:07.204 05:12:36 -- bdev/blockdev.sh@79 -- # local json 00:30:07.204 05:12:36 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:30:07.204 05:12:36 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:07.204 05:12:37 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:30:07.204 05:12:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:07.204 05:12:37 -- common/autotest_common.sh@10 -- # set +x 00:30:07.204 05:12:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:07.204 05:12:37 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:30:07.204 05:12:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:07.204 05:12:37 -- common/autotest_common.sh@10 -- # set +x 00:30:07.204 05:12:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:07.204 05:12:37 -- bdev/blockdev.sh@738 -- # cat 00:30:07.463 05:12:37 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:30:07.463 05:12:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:07.463 05:12:37 -- common/autotest_common.sh@10 -- # set +x 00:30:07.463 05:12:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:07.463 05:12:37 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:30:07.463 05:12:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:07.463 05:12:37 -- common/autotest_common.sh@10 -- # set +x 00:30:07.463 05:12:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:07.463 05:12:37 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:30:07.463 05:12:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:07.463 05:12:37 -- common/autotest_common.sh@10 -- # set +x 00:30:07.463 05:12:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:07.463 05:12:37 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:30:07.463 05:12:37 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:30:07.463 05:12:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:07.463 05:12:37 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:30:07.463 05:12:37 -- common/autotest_common.sh@10 -- # set +x 00:30:07.463 05:12:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:07.463 05:12:37 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:30:07.463 05:12:37 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:30:07.463 05:12:37 -- bdev/blockdev.sh@747 -- # jq -r .name 00:30:07.463 05:12:37 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:30:07.463 05:12:37 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:30:07.463 05:12:37 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:30:07.463 05:12:37 -- bdev/blockdev.sh@752 -- # killprocess 149210 00:30:07.463 05:12:37 -- common/autotest_common.sh@926 -- # '[' -z 149210 ']' 00:30:07.463 05:12:37 -- common/autotest_common.sh@930 -- # kill -0 149210 00:30:07.463 05:12:37 -- common/autotest_common.sh@931 -- # uname 00:30:07.463 05:12:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:07.463 05:12:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 149210 00:30:07.463 05:12:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:07.463 05:12:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:07.463 killing process with pid 149210 00:30:07.463 05:12:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 149210' 00:30:07.463 05:12:37 -- common/autotest_common.sh@945 -- # kill 149210 00:30:07.463 05:12:37 -- common/autotest_common.sh@950 -- # wait 149210 00:30:08.397 05:12:37 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:08.397 05:12:37 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:30:08.397 05:12:37 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:30:08.397 05:12:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:08.397 05:12:37 -- common/autotest_common.sh@10 -- # set +x 00:30:08.397 ************************************ 00:30:08.397 START TEST bdev_hello_world 00:30:08.397 ************************************ 00:30:08.397 05:12:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 
'' 00:30:08.397 [2024-04-27 05:12:38.039333] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:30:08.397 [2024-04-27 05:12:38.039637] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149628 ] 00:30:08.397 [2024-04-27 05:12:38.208475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.655 [2024-04-27 05:12:38.325811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.913 [2024-04-27 05:12:38.580646] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:30:08.913 [2024-04-27 05:12:38.580753] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:30:08.913 [2024-04-27 05:12:38.580817] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:30:08.913 [2024-04-27 05:12:38.583515] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:30:08.913 [2024-04-27 05:12:38.584151] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:30:08.913 [2024-04-27 05:12:38.584252] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:30:08.913 [2024-04-27 05:12:38.584556] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:30:08.913 00:30:08.913 [2024-04-27 05:12:38.584629] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:30:09.172 00:30:09.172 real 0m0.988s 00:30:09.172 user 0m0.608s 00:30:09.172 sys 0m0.280s 00:30:09.172 05:12:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:09.172 05:12:38 -- common/autotest_common.sh@10 -- # set +x 00:30:09.173 ************************************ 00:30:09.173 END TEST bdev_hello_world 00:30:09.173 ************************************ 00:30:09.173 05:12:39 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:30:09.173 05:12:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:09.173 05:12:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:09.173 05:12:39 -- common/autotest_common.sh@10 -- # set +x 00:30:09.173 ************************************ 00:30:09.173 START TEST bdev_bounds 00:30:09.173 ************************************ 00:30:09.173 05:12:39 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:30:09.173 05:12:39 -- bdev/blockdev.sh@288 -- # bdevio_pid=149666 00:30:09.173 05:12:39 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:30:09.173 Process bdevio pid: 149666 00:30:09.173 05:12:39 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 149666' 00:30:09.173 05:12:39 -- bdev/blockdev.sh@291 -- # waitforlisten 149666 00:30:09.173 05:12:39 -- common/autotest_common.sh@819 -- # '[' -z 149666 ']' 00:30:09.173 05:12:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.173 05:12:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:09.173 05:12:39 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:09.173 05:12:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
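The hello-world test that just finished is a plain run of the hello_bdev example against the first GPT partition: it opens Nvme0n1p1, writes "Hello World!", reads it back and exits, which is what the NOTICE lines above show. Minus the wrapper, the invocation is:

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1p1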
00:30:09.173 05:12:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:09.173 05:12:39 -- common/autotest_common.sh@10 -- # set +x 00:30:09.173 [2024-04-27 05:12:39.088359] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:30:09.173 [2024-04-27 05:12:39.088939] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149666 ] 00:30:09.460 [2024-04-27 05:12:39.271651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:09.718 [2024-04-27 05:12:39.408153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.718 [2024-04-27 05:12:39.408350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:09.718 [2024-04-27 05:12:39.408358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.284 05:12:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:10.284 05:12:40 -- common/autotest_common.sh@852 -- # return 0 00:30:10.284 05:12:40 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:30:10.284 I/O targets: 00:30:10.284 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:30:10.284 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:30:10.284 00:30:10.284 00:30:10.284 CUnit - A unit testing framework for C - Version 2.1-3 00:30:10.284 http://cunit.sourceforge.net/ 00:30:10.284 00:30:10.284 00:30:10.284 Suite: bdevio tests on: Nvme0n1p2 00:30:10.284 Test: blockdev write read block ...passed 00:30:10.284 Test: blockdev write zeroes read block ...passed 00:30:10.284 Test: blockdev write zeroes read no split ...passed 00:30:10.284 Test: blockdev write zeroes read split ...passed 00:30:10.284 Test: blockdev write zeroes read split partial ...passed 00:30:10.284 Test: blockdev reset ...[2024-04-27 05:12:40.141844] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:30:10.284 [2024-04-27 05:12:40.144107] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:10.284 passed 00:30:10.284 Test: blockdev write read 8 blocks ...passed 00:30:10.284 Test: blockdev write read size > 128k ...passed 00:30:10.284 Test: blockdev write read invalid size ...passed 00:30:10.284 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:10.284 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:10.284 Test: blockdev write read max offset ...passed 00:30:10.284 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:10.284 Test: blockdev writev readv 8 blocks ...passed 00:30:10.284 Test: blockdev writev readv 30 x 1block ...passed 00:30:10.284 Test: blockdev writev readv block ...passed 00:30:10.284 Test: blockdev writev readv size > 128k ...passed 00:30:10.284 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:10.284 Test: blockdev comparev and writev ...[2024-04-27 05:12:40.150606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x8be0b000 len:0x1000 00:30:10.284 [2024-04-27 05:12:40.150711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:10.284 passed 00:30:10.284 Test: blockdev nvme passthru rw ...passed 00:30:10.284 Test: blockdev nvme passthru vendor specific ...passed 00:30:10.284 Test: blockdev nvme admin passthru ...passed 00:30:10.284 Test: blockdev copy ...passed 00:30:10.284 Suite: bdevio tests on: Nvme0n1p1 00:30:10.284 Test: blockdev write read block ...passed 00:30:10.284 Test: blockdev write zeroes read block ...passed 00:30:10.284 Test: blockdev write zeroes read no split ...passed 00:30:10.284 Test: blockdev write zeroes read split ...passed 00:30:10.284 Test: blockdev write zeroes read split partial ...passed 00:30:10.284 Test: blockdev reset ...[2024-04-27 05:12:40.165770] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:30:10.284 [2024-04-27 05:12:40.167813] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:10.284 passed 00:30:10.284 Test: blockdev write read 8 blocks ...passed 00:30:10.284 Test: blockdev write read size > 128k ...passed 00:30:10.284 Test: blockdev write read invalid size ...passed 00:30:10.284 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:10.284 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:10.284 Test: blockdev write read max offset ...passed 00:30:10.284 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:10.284 Test: blockdev writev readv 8 blocks ...passed 00:30:10.284 Test: blockdev writev readv 30 x 1block ...passed 00:30:10.284 Test: blockdev writev readv block ...passed 00:30:10.284 Test: blockdev writev readv size > 128k ...passed 00:30:10.284 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:10.284 Test: blockdev comparev and writev ...[2024-04-27 05:12:40.174615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x8be0d000 len:0x1000 00:30:10.284 [2024-04-27 05:12:40.174697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:10.284 passed 00:30:10.284 Test: blockdev nvme passthru rw ...passed 00:30:10.284 Test: blockdev nvme passthru vendor specific ...passed 00:30:10.284 Test: blockdev nvme admin passthru ...passed 00:30:10.284 Test: blockdev copy ...passed 00:30:10.284 00:30:10.284 Run Summary: Type Total Ran Passed Failed Inactive 00:30:10.284 suites 2 2 n/a 0 0 00:30:10.284 tests 46 46 46 0 0 00:30:10.284 asserts 284 284 284 0 n/a 00:30:10.284 00:30:10.284 Elapsed time = 0.113 seconds 00:30:10.284 0 00:30:10.284 05:12:40 -- bdev/blockdev.sh@293 -- # killprocess 149666 00:30:10.284 05:12:40 -- common/autotest_common.sh@926 -- # '[' -z 149666 ']' 00:30:10.284 05:12:40 -- common/autotest_common.sh@930 -- # kill -0 149666 00:30:10.284 05:12:40 -- common/autotest_common.sh@931 -- # uname 00:30:10.284 05:12:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:10.284 05:12:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 149666 00:30:10.542 05:12:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:10.543 killing process with pid 149666 00:30:10.543 05:12:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:10.543 05:12:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 149666' 00:30:10.543 05:12:40 -- common/autotest_common.sh@945 -- # kill 149666 00:30:10.543 05:12:40 -- common/autotest_common.sh@950 -- # wait 149666 00:30:10.802 05:12:40 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:30:10.802 00:30:10.802 real 0m1.530s 00:30:10.802 user 0m3.562s 00:30:10.802 sys 0m0.405s 00:30:10.802 05:12:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:10.802 05:12:40 -- common/autotest_common.sh@10 -- # set +x 00:30:10.802 ************************************ 00:30:10.802 END TEST bdev_bounds 00:30:10.802 ************************************ 00:30:10.802 05:12:40 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:30:10.802 05:12:40 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:30:10.802 05:12:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:10.802 05:12:40 -- common/autotest_common.sh@10 -- # set +x 00:30:10.802 ************************************ 00:30:10.802 START TEST bdev_nbd 
00:30:10.802 ************************************ 00:30:10.802 05:12:40 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:30:10.802 05:12:40 -- bdev/blockdev.sh@298 -- # uname -s 00:30:10.802 05:12:40 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:30:10.802 05:12:40 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:10.803 05:12:40 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:10.803 05:12:40 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:30:10.803 05:12:40 -- bdev/blockdev.sh@302 -- # local bdev_all 00:30:10.803 05:12:40 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:30:10.803 05:12:40 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:30:10.803 05:12:40 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:30:10.803 05:12:40 -- bdev/blockdev.sh@309 -- # local nbd_all 00:30:10.803 05:12:40 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:30:10.803 05:12:40 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:10.803 05:12:40 -- bdev/blockdev.sh@312 -- # local nbd_list 00:30:10.803 05:12:40 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:30:10.803 05:12:40 -- bdev/blockdev.sh@313 -- # local bdev_list 00:30:10.803 05:12:40 -- bdev/blockdev.sh@316 -- # nbd_pid=149723 00:30:10.803 05:12:40 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:30:10.803 05:12:40 -- bdev/blockdev.sh@318 -- # waitforlisten 149723 /var/tmp/spdk-nbd.sock 00:30:10.803 05:12:40 -- common/autotest_common.sh@819 -- # '[' -z 149723 ']' 00:30:10.803 05:12:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:10.803 05:12:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:10.803 05:12:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:30:10.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:10.803 05:12:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:10.803 05:12:40 -- common/autotest_common.sh@10 -- # set +x 00:30:10.803 05:12:40 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:10.803 [2024-04-27 05:12:40.672716] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
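This second bdev_nbd pass exports both GPT partitions as kernel block devices through the bdev_svc app started above, which listens on its own RPC socket. The per-device RPC calls the trace below walks through reduce to the following (as run in this log, once bdev_svc is listening on /var/tmp/spdk-nbd.sock):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-nbd.sock
  $RPC -s $SOCK nbd_start_disk Nvme0n1p1 /dev/nbd0
  $RPC -s $SOCK nbd_start_disk Nvme0n1p2 /dev/nbd1
  $RPC -s $SOCK nbd_get_disks        # JSON list of bdev-to-nbd mappings, used for the count check
  $RPC -s $SOCK nbd_stop_disk /dev/nbd0
  $RPC -s $SOCK nbd_stop_disk /dev/nbd1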
00:30:10.803 [2024-04-27 05:12:40.673268] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:11.061 [2024-04-27 05:12:40.836411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.061 [2024-04-27 05:12:40.963246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.997 05:12:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:11.997 05:12:41 -- common/autotest_common.sh@852 -- # return 0 00:30:11.997 05:12:41 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:30:11.997 05:12:41 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:11.997 05:12:41 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:30:11.997 05:12:41 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:30:11.997 05:12:41 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:30:11.997 05:12:41 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:11.997 05:12:41 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:30:11.997 05:12:41 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:30:11.997 05:12:41 -- bdev/nbd_common.sh@24 -- # local i 00:30:11.997 05:12:41 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:30:11.997 05:12:41 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:30:11.997 05:12:41 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:30:11.997 05:12:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:30:11.997 05:12:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:30:11.997 05:12:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:30:11.997 05:12:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:30:11.997 05:12:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:30:11.997 05:12:41 -- common/autotest_common.sh@857 -- # local i 00:30:11.997 05:12:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:11.997 05:12:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:11.997 05:12:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:30:11.997 05:12:41 -- common/autotest_common.sh@861 -- # break 00:30:11.997 05:12:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:11.997 05:12:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:11.997 05:12:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:11.997 1+0 records in 00:30:11.997 1+0 records out 00:30:11.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654359 s, 6.3 MB/s 00:30:11.997 05:12:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:12.255 05:12:41 -- common/autotest_common.sh@874 -- # size=4096 00:30:12.255 05:12:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:12.256 05:12:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:12.256 05:12:41 -- common/autotest_common.sh@877 -- # return 0 00:30:12.256 05:12:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:12.256 05:12:41 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:30:12.256 05:12:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:30:12.514 05:12:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:30:12.514 05:12:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:30:12.514 05:12:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:30:12.514 05:12:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:30:12.514 05:12:42 -- common/autotest_common.sh@857 -- # local i 00:30:12.514 05:12:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:12.514 05:12:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:12.514 05:12:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:30:12.514 05:12:42 -- common/autotest_common.sh@861 -- # break 00:30:12.514 05:12:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:12.514 05:12:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:12.514 05:12:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:12.514 1+0 records in 00:30:12.514 1+0 records out 00:30:12.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000807816 s, 5.1 MB/s 00:30:12.514 05:12:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:12.514 05:12:42 -- common/autotest_common.sh@874 -- # size=4096 00:30:12.514 05:12:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:12.514 05:12:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:12.514 05:12:42 -- common/autotest_common.sh@877 -- # return 0 00:30:12.514 05:12:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:12.514 05:12:42 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:30:12.514 05:12:42 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:12.773 05:12:42 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:30:12.773 { 00:30:12.773 "nbd_device": "/dev/nbd0", 00:30:12.773 "bdev_name": "Nvme0n1p1" 00:30:12.773 }, 00:30:12.773 { 00:30:12.773 "nbd_device": "/dev/nbd1", 00:30:12.773 "bdev_name": "Nvme0n1p2" 00:30:12.773 } 00:30:12.773 ]' 00:30:12.773 05:12:42 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:30:12.773 05:12:42 -- bdev/nbd_common.sh@119 -- # echo '[ 00:30:12.773 { 00:30:12.773 "nbd_device": "/dev/nbd0", 00:30:12.773 "bdev_name": "Nvme0n1p1" 00:30:12.773 }, 00:30:12.773 { 00:30:12.773 "nbd_device": "/dev/nbd1", 00:30:12.773 "bdev_name": "Nvme0n1p2" 00:30:12.773 } 00:30:12.773 ]' 00:30:12.773 05:12:42 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:30:12.773 05:12:42 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:12.773 05:12:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:12.773 05:12:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:12.773 05:12:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:12.773 05:12:42 -- bdev/nbd_common.sh@51 -- # local i 00:30:12.773 05:12:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:12.773 05:12:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:13.031 05:12:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:13.031 05:12:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:13.031 05:12:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:13.031 05:12:42 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:13.031 05:12:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:13.031 05:12:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:13.031 05:12:42 -- bdev/nbd_common.sh@41 -- # break 00:30:13.031 05:12:42 -- bdev/nbd_common.sh@45 -- # return 0 00:30:13.031 05:12:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:13.031 05:12:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:13.289 05:12:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:13.289 05:12:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:13.289 05:12:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:13.289 05:12:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:13.289 05:12:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:13.289 05:12:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:13.289 05:12:43 -- bdev/nbd_common.sh@41 -- # break 00:30:13.289 05:12:43 -- bdev/nbd_common.sh@45 -- # return 0 00:30:13.289 05:12:43 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:13.289 05:12:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:13.289 05:12:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:13.547 05:12:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:13.547 05:12:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:13.547 05:12:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:13.547 05:12:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:13.547 05:12:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:30:13.547 05:12:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:13.547 05:12:43 -- bdev/nbd_common.sh@65 -- # true 00:30:13.547 05:12:43 -- bdev/nbd_common.sh@65 -- # count=0 00:30:13.547 05:12:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:30:13.547 05:12:43 -- bdev/nbd_common.sh@122 -- # count=0 00:30:13.548 05:12:43 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:30:13.548 05:12:43 -- bdev/nbd_common.sh@127 -- # return 0 00:30:13.548 05:12:43 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:30:13.548 05:12:43 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:13.548 05:12:43 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:30:13.548 05:12:43 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:13.548 05:12:43 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:13.548 05:12:43 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:13.548 05:12:43 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:30:13.548 05:12:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:13.548 05:12:43 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:30:13.548 05:12:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:13.548 05:12:43 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:13.548 05:12:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:13.548 05:12:43 -- bdev/nbd_common.sh@12 -- # local i 00:30:13.548 05:12:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:13.548 05:12:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:13.548 05:12:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:30:13.806 /dev/nbd0 00:30:13.806 05:12:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:13.806 05:12:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:13.806 05:12:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:30:13.806 05:12:43 -- common/autotest_common.sh@857 -- # local i 00:30:13.806 05:12:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:13.806 05:12:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:13.806 05:12:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:30:13.806 05:12:43 -- common/autotest_common.sh@861 -- # break 00:30:13.806 05:12:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:13.806 05:12:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:13.806 05:12:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:13.806 1+0 records in 00:30:13.806 1+0 records out 00:30:13.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396267 s, 10.3 MB/s 00:30:13.806 05:12:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:13.806 05:12:43 -- common/autotest_common.sh@874 -- # size=4096 00:30:13.806 05:12:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:13.806 05:12:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:13.806 05:12:43 -- common/autotest_common.sh@877 -- # return 0 00:30:13.806 05:12:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:13.806 05:12:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:13.806 05:12:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:30:14.065 /dev/nbd1 00:30:14.065 05:12:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:14.065 05:12:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:14.065 05:12:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:30:14.065 05:12:43 -- common/autotest_common.sh@857 -- # local i 00:30:14.065 05:12:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:14.065 05:12:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:14.065 05:12:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:30:14.065 05:12:43 -- common/autotest_common.sh@861 -- # break 00:30:14.065 05:12:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:14.065 05:12:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:14.065 05:12:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:14.065 1+0 records in 00:30:14.065 1+0 records out 00:30:14.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567249 s, 7.2 MB/s 00:30:14.065 05:12:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:14.065 05:12:43 -- common/autotest_common.sh@874 -- # size=4096 00:30:14.065 05:12:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:14.065 05:12:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:14.065 05:12:43 -- common/autotest_common.sh@877 -- # return 0 00:30:14.065 05:12:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:14.065 05:12:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:14.065 05:12:43 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
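[editor's note] The trace above exports each GPT partition bdev as an NBD device over the RPC socket and then polls until the kernel device actually answers I/O. A minimal stand-alone sketch of that start-and-wait pattern follows; the socket path, bdev name and device node match the log, while the retry pause and the scratch file path are illustrative additions.

    #!/usr/bin/env bash
    # Sketch: export a bdev over NBD and wait until the kernel device answers I/O.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    "$rpc" -s "$sock" nbd_start_disk Nvme0n1p1 /dev/nbd0

    for ((i = 1; i <= 20; i++)); do
        # the device shows up in /proc/partitions once the NBD connection is live
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1    # illustrative pause; the harness retries up to 20 times
    done

    # confirm the device really serves reads: one direct 4 KiB read, then check the
    # scratch file is non-empty, mirroring the dd/stat probe in the trace above
    dd if=/dev/nbd0 of=/tmp/nbdprobe bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdprobe)" -ne 0 ] && echo "nbd0 ready"
    rm -f /tmp/nbdprobe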
00:30:14.065 05:12:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:14.065 05:12:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:14.324 05:12:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:14.324 { 00:30:14.324 "nbd_device": "/dev/nbd0", 00:30:14.324 "bdev_name": "Nvme0n1p1" 00:30:14.324 }, 00:30:14.324 { 00:30:14.324 "nbd_device": "/dev/nbd1", 00:30:14.324 "bdev_name": "Nvme0n1p2" 00:30:14.324 } 00:30:14.324 ]' 00:30:14.324 05:12:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:14.324 05:12:44 -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:14.324 { 00:30:14.324 "nbd_device": "/dev/nbd0", 00:30:14.324 "bdev_name": "Nvme0n1p1" 00:30:14.324 }, 00:30:14.324 { 00:30:14.324 "nbd_device": "/dev/nbd1", 00:30:14.324 "bdev_name": "Nvme0n1p2" 00:30:14.324 } 00:30:14.324 ]' 00:30:14.324 05:12:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:14.324 /dev/nbd1' 00:30:14.324 05:12:44 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:14.324 /dev/nbd1' 00:30:14.324 05:12:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:14.324 05:12:44 -- bdev/nbd_common.sh@65 -- # count=2 00:30:14.324 05:12:44 -- bdev/nbd_common.sh@66 -- # echo 2 00:30:14.324 05:12:44 -- bdev/nbd_common.sh@95 -- # count=2 00:30:14.324 05:12:44 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:30:14.324 05:12:44 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:30:14.324 05:12:44 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:14.324 05:12:44 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:14.324 05:12:44 -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:14.324 05:12:44 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:14.324 05:12:44 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:14.324 05:12:44 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:30:14.324 256+0 records in 00:30:14.324 256+0 records out 00:30:14.324 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00847997 s, 124 MB/s 00:30:14.324 05:12:44 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:14.324 05:12:44 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:14.583 256+0 records in 00:30:14.583 256+0 records out 00:30:14.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0898094 s, 11.7 MB/s 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:14.583 256+0 records in 00:30:14.583 256+0 records out 00:30:14.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.102704 s, 10.2 MB/s 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
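[editor's note] nbd_dd_data_verify, traced above, writes the same 1 MiB of random data to every exported NBD device and, in the verify pass that follows, byte-compares each device back against the source file. A condensed sketch of that write/verify round trip, using an illustrative temp-file path in place of the repo's test/bdev/nbdrandtest:

    # Sketch of the dd write pass and cmp verify pass run against both NBD devices.
    tmp=/tmp/nbdrandtest              # illustrative path; the log uses test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write: 256 x 4 KiB of random data, pushed to each device with O_DIRECT
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify: byte-compare the first 1 MiB of each device against the source file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev" || echo "mismatch on $dev" >&2
    done
    rm "$tmp"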
00:30:14.583 05:12:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@51 -- # local i 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:14.583 05:12:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:14.841 05:12:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:14.841 05:12:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:14.841 05:12:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:14.841 05:12:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:14.841 05:12:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:14.841 05:12:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:14.841 05:12:44 -- bdev/nbd_common.sh@41 -- # break 00:30:14.841 05:12:44 -- bdev/nbd_common.sh@45 -- # return 0 00:30:14.841 05:12:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:14.841 05:12:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:15.100 05:12:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:15.100 05:12:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:15.100 05:12:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:15.100 05:12:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:15.100 05:12:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:15.100 05:12:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:15.100 05:12:44 -- bdev/nbd_common.sh@41 -- # break 00:30:15.100 05:12:44 -- bdev/nbd_common.sh@45 -- # return 0 00:30:15.100 05:12:44 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:15.100 05:12:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:15.100 05:12:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:15.667 05:12:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:15.667 05:12:45 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:15.667 05:12:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:15.667 05:12:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:15.667 05:12:45 -- bdev/nbd_common.sh@65 -- # echo '' 00:30:15.667 05:12:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:15.667 05:12:45 -- bdev/nbd_common.sh@65 -- # true 00:30:15.667 05:12:45 -- bdev/nbd_common.sh@65 -- # count=0 00:30:15.667 05:12:45 -- bdev/nbd_common.sh@66 -- # echo 0 00:30:15.667 05:12:45 -- bdev/nbd_common.sh@104 -- # count=0 00:30:15.667 05:12:45 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:15.667 05:12:45 -- 
bdev/nbd_common.sh@109 -- # return 0 00:30:15.667 05:12:45 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:15.667 05:12:45 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:15.667 05:12:45 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:15.667 05:12:45 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:30:15.667 05:12:45 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:30:15.667 05:12:45 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:30:15.925 malloc_lvol_verify 00:30:15.925 05:12:45 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:30:16.185 eac5d567-d722-46df-a1e5-219c9db7bf56 00:30:16.185 05:12:45 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:30:16.443 ba29c527-2edb-4044-92a7-cf6962144cd9 00:30:16.443 05:12:46 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:30:16.701 /dev/nbd0 00:30:16.701 05:12:46 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:30:16.701 mke2fs 1.46.5 (30-Dec-2021) 00:30:16.701 00:30:16.701 Filesystem too small for a journal 00:30:16.701 Discarding device blocks: 0/1024 done 00:30:16.701 Creating filesystem with 1024 4k blocks and 1024 inodes 00:30:16.701 00:30:16.701 Allocating group tables: 0/1 done 00:30:16.701 Writing inode tables: 0/1 done 00:30:16.701 Writing superblocks and filesystem accounting information: 0/1 done 00:30:16.701 00:30:16.701 05:12:46 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:30:16.701 05:12:46 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:16.701 05:12:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:16.701 05:12:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:16.701 05:12:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:16.701 05:12:46 -- bdev/nbd_common.sh@51 -- # local i 00:30:16.701 05:12:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:16.701 05:12:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:16.959 05:12:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:16.959 05:12:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:16.960 05:12:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:16.960 05:12:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:16.960 05:12:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:16.960 05:12:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:16.960 05:12:46 -- bdev/nbd_common.sh@41 -- # break 00:30:16.960 05:12:46 -- bdev/nbd_common.sh@45 -- # return 0 00:30:16.960 05:12:46 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:30:16.960 05:12:46 -- bdev/nbd_common.sh@147 -- # return 0 00:30:16.960 05:12:46 -- bdev/blockdev.sh@324 -- # killprocess 149723 00:30:16.960 05:12:46 -- common/autotest_common.sh@926 -- # '[' -z 149723 ']' 00:30:16.960 05:12:46 -- common/autotest_common.sh@930 -- # kill -0 149723 00:30:16.960 05:12:46 -- common/autotest_common.sh@931 -- # uname 00:30:16.960 05:12:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:16.960 05:12:46 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 149723 00:30:16.960 05:12:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:16.960 05:12:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:16.960 killing process with pid 149723 00:30:16.960 05:12:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 149723' 00:30:16.960 05:12:46 -- common/autotest_common.sh@945 -- # kill 149723 00:30:16.960 05:12:46 -- common/autotest_common.sh@950 -- # wait 149723 00:30:17.219 05:12:47 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:30:17.219 00:30:17.219 real 0m6.475s 00:30:17.219 user 0m9.760s 00:30:17.219 sys 0m1.681s 00:30:17.219 05:12:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:17.219 05:12:47 -- common/autotest_common.sh@10 -- # set +x 00:30:17.219 ************************************ 00:30:17.219 END TEST bdev_nbd 00:30:17.219 ************************************ 00:30:17.219 05:12:47 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:30:17.219 05:12:47 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:30:17.219 05:12:47 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:30:17.219 skipping fio tests on NVMe due to multi-ns failures. 00:30:17.219 05:12:47 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:30:17.219 05:12:47 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:17.219 05:12:47 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:17.219 05:12:47 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:30:17.219 05:12:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:17.219 05:12:47 -- common/autotest_common.sh@10 -- # set +x 00:30:17.478 ************************************ 00:30:17.478 START TEST bdev_verify 00:30:17.478 ************************************ 00:30:17.478 05:12:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:17.478 [2024-04-27 05:12:47.201869] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:30:17.478 [2024-04-27 05:12:47.202113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149969 ] 00:30:17.478 [2024-04-27 05:12:47.365816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:17.736 [2024-04-27 05:12:47.466530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.736 [2024-04-27 05:12:47.466526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.994 Running I/O for 5 seconds... 
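[editor's note] bdev_verify drives both GPT partitions with the bdevperf example application in verify mode; the latency table that follows is its output. Reproducing the invocation outside the run_test wrapper looks roughly like this (paths and flags are taken from the trace; the wrapper's trailing empty argument is omitted). The second command only changes the I/O size, which is all the bdev_verify_big_io pass further down does differently.

    # Sketch: the two bdevperf invocations behind bdev_verify and bdev_verify_big_io.
    spdk=/home/vagrant/spdk_repo/spdk

    # 4 KiB verify workload, queue depth 128, 5 s run, two reactors (core mask 0x3)
    "$spdk/build/examples/bdevperf" \
        --json "$spdk/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

    # the big-I/O pass only raises the I/O size to 64 KiB
    "$spdk/build/examples/bdevperf" \
        --json "$spdk/test/bdev/bdev.json" \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3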
00:30:23.291 00:30:23.291 Latency(us) 00:30:23.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.291 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:23.291 Verification LBA range: start 0x0 length 0x4ff80 00:30:23.291 Nvme0n1p1 : 5.02 7676.46 29.99 0.00 0.00 16627.38 1474.56 24665.37 00:30:23.291 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:23.291 Verification LBA range: start 0x4ff80 length 0x4ff80 00:30:23.291 Nvme0n1p1 : 5.02 7729.71 30.19 0.00 0.00 16510.83 3217.22 23354.65 00:30:23.291 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:23.291 Verification LBA range: start 0x0 length 0x4ff7f 00:30:23.291 Nvme0n1p2 : 5.02 7683.29 30.01 0.00 0.00 16602.55 372.36 21209.83 00:30:23.292 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:23.292 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:30:23.292 Nvme0n1p2 : 5.02 7740.15 30.23 0.00 0.00 16469.76 916.01 16801.05 00:30:23.292 =================================================================================================================== 00:30:23.292 Total : 30829.62 120.43 0.00 0.00 16552.38 372.36 24665.37 00:30:27.475 00:30:27.475 real 0m9.915s 00:30:27.475 user 0m18.899s 00:30:27.475 sys 0m0.340s 00:30:27.475 05:12:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:27.475 05:12:57 -- common/autotest_common.sh@10 -- # set +x 00:30:27.475 ************************************ 00:30:27.475 END TEST bdev_verify 00:30:27.475 ************************************ 00:30:27.475 05:12:57 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:27.475 05:12:57 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:30:27.475 05:12:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:27.475 05:12:57 -- common/autotest_common.sh@10 -- # set +x 00:30:27.475 ************************************ 00:30:27.475 START TEST bdev_verify_big_io 00:30:27.475 ************************************ 00:30:27.475 05:12:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:27.475 [2024-04-27 05:12:57.176783] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:30:27.475 [2024-04-27 05:12:57.178111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150070 ] 00:30:27.475 [2024-04-27 05:12:57.352538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:27.734 [2024-04-27 05:12:57.463380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.734 [2024-04-27 05:12:57.463386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.993 Running I/O for 5 seconds... 
00:30:33.262 00:30:33.262 Latency(us) 00:30:33.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:33.262 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:33.262 Verification LBA range: start 0x0 length 0x4ff8 00:30:33.262 Nvme0n1p1 : 5.10 843.58 52.72 0.00 0.00 149231.79 25737.77 305040.29 00:30:33.262 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:33.262 Verification LBA range: start 0x4ff8 length 0x4ff8 00:30:33.262 Nvme0n1p1 : 5.10 910.96 56.94 0.00 0.00 138270.31 22997.18 151566.89 00:30:33.262 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:33.262 Verification LBA range: start 0x0 length 0x4ff7 00:30:33.262 Nvme0n1p2 : 5.13 865.86 54.12 0.00 0.00 142590.12 733.56 236406.23 00:30:33.262 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:33.262 Verification LBA range: start 0x4ff7 length 0x4ff7 00:30:33.262 Nvme0n1p2 : 5.13 932.24 58.26 0.00 0.00 134318.93 1325.61 148707.14 00:30:33.262 =================================================================================================================== 00:30:33.262 Total : 3552.64 222.04 0.00 0.00 140882.25 733.56 305040.29 00:30:33.836 00:30:33.836 real 0m6.510s 00:30:33.836 user 0m12.086s 00:30:33.836 sys 0m0.326s 00:30:33.836 05:13:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:33.836 05:13:03 -- common/autotest_common.sh@10 -- # set +x 00:30:33.836 ************************************ 00:30:33.836 END TEST bdev_verify_big_io 00:30:33.836 ************************************ 00:30:33.836 05:13:03 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:33.836 05:13:03 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:33.836 05:13:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:33.836 05:13:03 -- common/autotest_common.sh@10 -- # set +x 00:30:33.836 ************************************ 00:30:33.836 START TEST bdev_write_zeroes 00:30:33.836 ************************************ 00:30:33.836 05:13:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:33.836 [2024-04-27 05:13:03.746850] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:30:33.836 [2024-04-27 05:13:03.748123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150166 ] 00:30:34.095 [2024-04-27 05:13:03.921449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.095 [2024-04-27 05:13:04.018620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.659 Running I/O for 1 seconds... 
00:30:35.593 00:30:35.593 Latency(us) 00:30:35.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.593 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:35.593 Nvme0n1p1 : 1.00 27083.59 105.80 0.00 0.00 4714.05 2278.87 16801.05 00:30:35.593 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:35.593 Nvme0n1p2 : 1.01 27098.54 105.85 0.00 0.00 4708.35 2129.92 13345.51 00:30:35.593 =================================================================================================================== 00:30:35.593 Total : 54182.13 211.65 0.00 0.00 4711.20 2129.92 16801.05 00:30:36.160 00:30:36.160 real 0m2.088s 00:30:36.160 user 0m1.661s 00:30:36.160 sys 0m0.325s 00:30:36.160 05:13:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:36.160 05:13:05 -- common/autotest_common.sh@10 -- # set +x 00:30:36.160 ************************************ 00:30:36.160 END TEST bdev_write_zeroes 00:30:36.160 ************************************ 00:30:36.160 05:13:05 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:36.160 05:13:05 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:36.160 05:13:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:36.160 05:13:05 -- common/autotest_common.sh@10 -- # set +x 00:30:36.160 ************************************ 00:30:36.160 START TEST bdev_json_nonenclosed 00:30:36.160 ************************************ 00:30:36.160 05:13:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:36.160 [2024-04-27 05:13:05.888514] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:30:36.161 [2024-04-27 05:13:05.889608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150212 ] 00:30:36.161 [2024-04-27 05:13:06.062945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.418 [2024-04-27 05:13:06.191404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.418 [2024-04-27 05:13:06.191734] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:30:36.418 [2024-04-27 05:13:06.191786] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:36.676 00:30:36.676 real 0m0.522s 00:30:36.676 user 0m0.290s 00:30:36.676 sys 0m0.132s 00:30:36.676 05:13:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:36.676 05:13:06 -- common/autotest_common.sh@10 -- # set +x 00:30:36.676 ************************************ 00:30:36.676 END TEST bdev_json_nonenclosed 00:30:36.676 ************************************ 00:30:36.676 05:13:06 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:36.676 05:13:06 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:36.676 05:13:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:36.676 05:13:06 -- common/autotest_common.sh@10 -- # set +x 00:30:36.676 ************************************ 00:30:36.676 START TEST bdev_json_nonarray 00:30:36.676 ************************************ 00:30:36.676 05:13:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:36.676 [2024-04-27 05:13:06.465008] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:30:36.676 [2024-04-27 05:13:06.466181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150249 ] 00:30:36.934 [2024-04-27 05:13:06.637494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.934 [2024-04-27 05:13:06.759527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.934 [2024-04-27 05:13:06.759805] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
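[editor's note] The two negative tests here feed bdevperf deliberately malformed JSON configs and assert that spdk_app_stop exits non-zero. The repo's nonenclosed.json and nonarray.json are not reproduced in this log; the bodies below are illustrative stand-ins that would be expected to hit the same two validation errors seen in the trace.

    # Illustrative malformed configs (not the actual repo files) for the two
    # JSON-validation error paths exercised above.

    # top-level value is an array, so the config is "not enclosed in {}"
    cat > /tmp/nonenclosed.json <<'EOF'
    [ { "subsystems": [] } ]
    EOF

    # "subsystems" is an object rather than an array
    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": { "bdev": { "config": [] } } }
    EOF

    # either file should make bdevperf fail during subsystem init from JSON,
    # which is exactly what the wrappers above check for
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1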
00:30:36.934 [2024-04-27 05:13:06.759859] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:37.192 00:30:37.193 real 0m0.518s 00:30:37.193 user 0m0.274s 00:30:37.193 sys 0m0.143s 00:30:37.193 05:13:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:37.193 05:13:06 -- common/autotest_common.sh@10 -- # set +x 00:30:37.193 ************************************ 00:30:37.193 END TEST bdev_json_nonarray 00:30:37.193 ************************************ 00:30:37.193 05:13:06 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:30:37.193 05:13:06 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:30:37.193 05:13:06 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:30:37.193 05:13:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:37.193 05:13:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:37.193 05:13:06 -- common/autotest_common.sh@10 -- # set +x 00:30:37.193 ************************************ 00:30:37.193 START TEST bdev_gpt_uuid 00:30:37.193 ************************************ 00:30:37.193 05:13:06 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:30:37.193 05:13:06 -- bdev/blockdev.sh@612 -- # local bdev 00:30:37.193 05:13:06 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:30:37.193 05:13:06 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=150280 00:30:37.193 05:13:06 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:37.193 05:13:06 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:37.193 05:13:06 -- bdev/blockdev.sh@47 -- # waitforlisten 150280 00:30:37.193 05:13:06 -- common/autotest_common.sh@819 -- # '[' -z 150280 ']' 00:30:37.193 05:13:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.193 05:13:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:37.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:37.193 05:13:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.193 05:13:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:37.193 05:13:06 -- common/autotest_common.sh@10 -- # set +x 00:30:37.193 [2024-04-27 05:13:07.061637] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:30:37.193 [2024-04-27 05:13:07.061927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150280 ] 00:30:37.451 [2024-04-27 05:13:07.233888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.451 [2024-04-27 05:13:07.358461] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:37.451 [2024-04-27 05:13:07.358745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.385 05:13:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:38.385 05:13:08 -- common/autotest_common.sh@852 -- # return 0 00:30:38.385 05:13:08 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:38.385 05:13:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:38.385 05:13:08 -- common/autotest_common.sh@10 -- # set +x 00:30:38.385 Some configs were skipped because the RPC state that can call them passed over. 
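[editor's note] bdev_gpt_uuid, which starts here, checks that a GPT partition bdev can be looked up by its partition UUID and that the reported alias and unique_partition_guid round-trip. The essential RPC/jq sequence that the rest of the test performs is sketched below; the UUID is the SPDK_TEST_first value from the trace, and rpc.py is assumed to use its default /var/tmp/spdk.sock socket (the test's rpc_cmd wrapper does the same).

    # Sketch: look up a GPT partition bdev by UUID and check it round-trips.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=6f89f330-603b-4116-ac73-2ca8eae53030     # SPDK_TEST_first partition UUID from the trace

    "$rpc" load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$rpc" bdev_wait_for_examine

    bdev=$("$rpc" bdev_get_bdevs -b "$uuid")
    [ "$(jq -r length <<< "$bdev")" -eq 1 ]
    [ "$(jq -r '.[0].aliases[0]' <<< "$bdev")" = "$uuid" ]
    [ "$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev")" = "$uuid" ]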
00:30:38.385 05:13:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:38.385 05:13:08 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:30:38.385 05:13:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:38.385 05:13:08 -- common/autotest_common.sh@10 -- # set +x 00:30:38.385 05:13:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:38.385 05:13:08 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:30:38.385 05:13:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:38.385 05:13:08 -- common/autotest_common.sh@10 -- # set +x 00:30:38.385 05:13:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:38.385 05:13:08 -- bdev/blockdev.sh@619 -- # bdev='[ 00:30:38.385 { 00:30:38.385 "name": "Nvme0n1p1", 00:30:38.385 "aliases": [ 00:30:38.385 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:30:38.385 ], 00:30:38.385 "product_name": "GPT Disk", 00:30:38.385 "block_size": 4096, 00:30:38.385 "num_blocks": 655104, 00:30:38.385 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:38.385 "assigned_rate_limits": { 00:30:38.385 "rw_ios_per_sec": 0, 00:30:38.385 "rw_mbytes_per_sec": 0, 00:30:38.385 "r_mbytes_per_sec": 0, 00:30:38.385 "w_mbytes_per_sec": 0 00:30:38.385 }, 00:30:38.385 "claimed": false, 00:30:38.385 "zoned": false, 00:30:38.385 "supported_io_types": { 00:30:38.385 "read": true, 00:30:38.385 "write": true, 00:30:38.385 "unmap": true, 00:30:38.385 "write_zeroes": true, 00:30:38.385 "flush": true, 00:30:38.385 "reset": true, 00:30:38.385 "compare": true, 00:30:38.385 "compare_and_write": false, 00:30:38.385 "abort": true, 00:30:38.385 "nvme_admin": false, 00:30:38.385 "nvme_io": false 00:30:38.385 }, 00:30:38.385 "driver_specific": { 00:30:38.385 "gpt": { 00:30:38.385 "base_bdev": "Nvme0n1", 00:30:38.385 "offset_blocks": 256, 00:30:38.385 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:30:38.385 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:38.385 "partition_name": "SPDK_TEST_first" 00:30:38.385 } 00:30:38.385 } 00:30:38.385 } 00:30:38.385 ]' 00:30:38.385 05:13:08 -- bdev/blockdev.sh@620 -- # jq -r length 00:30:38.385 05:13:08 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:30:38.385 05:13:08 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:30:38.385 05:13:08 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:38.385 05:13:08 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:38.644 05:13:08 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:38.644 05:13:08 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:30:38.644 05:13:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:38.644 05:13:08 -- common/autotest_common.sh@10 -- # set +x 00:30:38.644 05:13:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:38.644 05:13:08 -- bdev/blockdev.sh@624 -- # bdev='[ 00:30:38.644 { 00:30:38.644 "name": "Nvme0n1p2", 00:30:38.644 "aliases": [ 00:30:38.644 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:30:38.644 ], 00:30:38.644 "product_name": "GPT Disk", 00:30:38.644 "block_size": 4096, 00:30:38.644 "num_blocks": 655103, 00:30:38.644 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:38.644 "assigned_rate_limits": { 00:30:38.644 "rw_ios_per_sec": 0, 00:30:38.644 
"rw_mbytes_per_sec": 0, 00:30:38.644 "r_mbytes_per_sec": 0, 00:30:38.644 "w_mbytes_per_sec": 0 00:30:38.644 }, 00:30:38.644 "claimed": false, 00:30:38.644 "zoned": false, 00:30:38.644 "supported_io_types": { 00:30:38.644 "read": true, 00:30:38.644 "write": true, 00:30:38.644 "unmap": true, 00:30:38.644 "write_zeroes": true, 00:30:38.644 "flush": true, 00:30:38.644 "reset": true, 00:30:38.644 "compare": true, 00:30:38.644 "compare_and_write": false, 00:30:38.644 "abort": true, 00:30:38.644 "nvme_admin": false, 00:30:38.644 "nvme_io": false 00:30:38.644 }, 00:30:38.644 "driver_specific": { 00:30:38.644 "gpt": { 00:30:38.644 "base_bdev": "Nvme0n1", 00:30:38.644 "offset_blocks": 655360, 00:30:38.644 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:30:38.644 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:38.644 "partition_name": "SPDK_TEST_second" 00:30:38.644 } 00:30:38.644 } 00:30:38.644 } 00:30:38.644 ]' 00:30:38.644 05:13:08 -- bdev/blockdev.sh@625 -- # jq -r length 00:30:38.644 05:13:08 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:30:38.644 05:13:08 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:30:38.644 05:13:08 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:38.644 05:13:08 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:38.644 05:13:08 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:38.644 05:13:08 -- bdev/blockdev.sh@629 -- # killprocess 150280 00:30:38.644 05:13:08 -- common/autotest_common.sh@926 -- # '[' -z 150280 ']' 00:30:38.644 05:13:08 -- common/autotest_common.sh@930 -- # kill -0 150280 00:30:38.644 05:13:08 -- common/autotest_common.sh@931 -- # uname 00:30:38.644 05:13:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:38.644 05:13:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 150280 00:30:38.644 05:13:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:38.644 05:13:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:38.644 killing process with pid 150280 00:30:38.644 05:13:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 150280' 00:30:38.644 05:13:08 -- common/autotest_common.sh@945 -- # kill 150280 00:30:38.644 05:13:08 -- common/autotest_common.sh@950 -- # wait 150280 00:30:39.583 00:30:39.583 real 0m2.299s 00:30:39.583 user 0m2.432s 00:30:39.583 sys 0m0.641s 00:30:39.583 05:13:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:39.583 05:13:09 -- common/autotest_common.sh@10 -- # set +x 00:30:39.583 ************************************ 00:30:39.583 END TEST bdev_gpt_uuid 00:30:39.583 ************************************ 00:30:39.583 05:13:09 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:30:39.583 05:13:09 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:30:39.583 05:13:09 -- bdev/blockdev.sh@809 -- # cleanup 00:30:39.583 05:13:09 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:39.583 05:13:09 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:39.583 05:13:09 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:30:39.583 05:13:09 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:30:39.583 05:13:09 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:30:39.583 05:13:09 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:39.842 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:39.842 Waiting for block devices as requested 00:30:39.842 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:40.100 05:13:09 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:30:40.100 05:13:09 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:30:40.100 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:30:40.100 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:30:40.100 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:30:40.100 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:30:40.100 05:13:09 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:30:40.100 00:30:40.101 real 0m38.590s 00:30:40.101 user 0m57.382s 00:30:40.101 sys 0m7.195s 00:30:40.101 05:13:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:40.101 ************************************ 00:30:40.101 END TEST blockdev_nvme_gpt 00:30:40.101 05:13:09 -- common/autotest_common.sh@10 -- # set +x 00:30:40.101 ************************************ 00:30:40.101 05:13:09 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:40.101 05:13:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:40.101 05:13:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:40.101 05:13:09 -- common/autotest_common.sh@10 -- # set +x 00:30:40.101 ************************************ 00:30:40.101 START TEST nvme 00:30:40.101 ************************************ 00:30:40.101 05:13:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:40.101 * Looking for test storage... 00:30:40.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:30:40.101 05:13:09 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:40.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:40.667 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:30:42.573 05:13:12 -- nvme/nvme.sh@79 -- # uname 00:30:42.574 05:13:12 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:30:42.574 05:13:12 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:30:42.574 05:13:12 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:30:42.574 05:13:12 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:30:42.574 05:13:12 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:30:42.574 05:13:12 -- common/autotest_common.sh@1045 -- # echo 0 00:30:42.574 05:13:12 -- common/autotest_common.sh@1047 -- # stubpid=150679 00:30:42.574 Waiting for stub to ready for secondary processes... 00:30:42.574 05:13:12 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:30:42.574 05:13:12 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:30:42.574 05:13:12 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:42.574 05:13:12 -- common/autotest_common.sh@1051 -- # [[ -e /proc/150679 ]] 00:30:42.574 05:13:12 -- common/autotest_common.sh@1052 -- # sleep 1s 00:30:42.574 [2024-04-27 05:13:12.468088] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
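[editor's note] The nvme.sh suite starting here launches a primary SPDK "stub" process that initializes hugepages and claims the NVMe device, then waits for it to publish /var/run/spdk_stub0 before running the actual tests as secondary processes against shared memory id 0. A simplified sketch of that start-and-wait pattern, with flags taken from the trace (the 1 s poll and the death check mirror the loop shown below; the error handling wording is illustrative):

    # Sketch: start the SPDK stub as a primary process and wait until it is ready
    # for secondary processes (-s 4096 MiB hugepages, shm id 0, core mask 0xE).
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/test/app/stub/stub" -s 4096 -i 0 -m 0xE &
    stubpid=$!

    echo "Waiting for stub to ready for secondary processes..."
    while [ ! -e /var/run/spdk_stub0 ]; do
        # bail out if the stub died before signalling readiness
        [ -e /proc/$stubpid ] || { echo "stub exited early" >&2; exit 1; }
        sleep 1
    done
    echo done.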
00:30:42.574 [2024-04-27 05:13:12.468376] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:43.509 05:13:13 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:43.509 05:13:13 -- common/autotest_common.sh@1051 -- # [[ -e /proc/150679 ]] 00:30:43.509 05:13:13 -- common/autotest_common.sh@1052 -- # sleep 1s 00:30:44.884 05:13:14 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:44.884 05:13:14 -- common/autotest_common.sh@1051 -- # [[ -e /proc/150679 ]] 00:30:44.884 05:13:14 -- common/autotest_common.sh@1052 -- # sleep 1s 00:30:44.884 [2024-04-27 05:13:14.636742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:44.884 [2024-04-27 05:13:14.730975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:44.884 [2024-04-27 05:13:14.731163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.884 [2024-04-27 05:13:14.731167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:44.884 [2024-04-27 05:13:14.742628] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:44.884 [2024-04-27 05:13:14.753292] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:30:44.884 [2024-04-27 05:13:14.754139] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:30:45.821 05:13:15 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:45.821 done. 00:30:45.821 05:13:15 -- common/autotest_common.sh@1054 -- # echo done. 00:30:45.821 05:13:15 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:45.821 05:13:15 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:30:45.821 05:13:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:45.821 05:13:15 -- common/autotest_common.sh@10 -- # set +x 00:30:45.821 ************************************ 00:30:45.821 START TEST nvme_reset 00:30:45.821 ************************************ 00:30:45.821 05:13:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:45.821 Initializing NVMe Controllers 00:30:45.821 Skipping QEMU NVMe SSD at 0000:00:06.0 00:30:45.821 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:30:46.080 00:30:46.080 real 0m0.291s 00:30:46.080 user 0m0.104s 00:30:46.080 sys 0m0.125s 00:30:46.080 05:13:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:46.080 05:13:15 -- common/autotest_common.sh@10 -- # set +x 00:30:46.080 ************************************ 00:30:46.080 END TEST nvme_reset 00:30:46.080 ************************************ 00:30:46.080 05:13:15 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:30:46.080 05:13:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:46.080 05:13:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:46.080 05:13:15 -- common/autotest_common.sh@10 -- # set +x 00:30:46.080 ************************************ 00:30:46.080 START TEST nvme_identify 00:30:46.080 ************************************ 00:30:46.080 05:13:15 -- common/autotest_common.sh@1104 -- # nvme_identify 00:30:46.080 05:13:15 -- nvme/nvme.sh@12 -- # bdfs=() 00:30:46.080 05:13:15 -- 
nvme/nvme.sh@12 -- # local bdfs bdf 00:30:46.080 05:13:15 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:30:46.080 05:13:15 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:30:46.080 05:13:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:46.080 05:13:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:46.080 05:13:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:46.080 05:13:15 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:46.080 05:13:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:46.080 05:13:15 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:46.080 05:13:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:30:46.080 05:13:15 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:30:46.341 [2024-04-27 05:13:16.095456] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 150735 terminated unexpected 00:30:46.341 ===================================================== 00:30:46.341 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:46.341 ===================================================== 00:30:46.341 Controller Capabilities/Features 00:30:46.341 ================================ 00:30:46.341 Vendor ID: 1b36 00:30:46.341 Subsystem Vendor ID: 1af4 00:30:46.341 Serial Number: 12340 00:30:46.341 Model Number: QEMU NVMe Ctrl 00:30:46.341 Firmware Version: 8.0.0 00:30:46.341 Recommended Arb Burst: 6 00:30:46.341 IEEE OUI Identifier: 00 54 52 00:30:46.341 Multi-path I/O 00:30:46.341 May have multiple subsystem ports: No 00:30:46.341 May have multiple controllers: No 00:30:46.341 Associated with SR-IOV VF: No 00:30:46.341 Max Data Transfer Size: 524288 00:30:46.341 Max Number of Namespaces: 256 00:30:46.341 Max Number of I/O Queues: 64 00:30:46.341 NVMe Specification Version (VS): 1.4 00:30:46.341 NVMe Specification Version (Identify): 1.4 00:30:46.341 Maximum Queue Entries: 2048 00:30:46.341 Contiguous Queues Required: Yes 00:30:46.341 Arbitration Mechanisms Supported 00:30:46.341 Weighted Round Robin: Not Supported 00:30:46.341 Vendor Specific: Not Supported 00:30:46.341 Reset Timeout: 7500 ms 00:30:46.341 Doorbell Stride: 4 bytes 00:30:46.341 NVM Subsystem Reset: Not Supported 00:30:46.341 Command Sets Supported 00:30:46.341 NVM Command Set: Supported 00:30:46.341 Boot Partition: Not Supported 00:30:46.341 Memory Page Size Minimum: 4096 bytes 00:30:46.341 Memory Page Size Maximum: 65536 bytes 00:30:46.341 Persistent Memory Region: Not Supported 00:30:46.341 Optional Asynchronous Events Supported 00:30:46.341 Namespace Attribute Notices: Supported 00:30:46.341 Firmware Activation Notices: Not Supported 00:30:46.341 ANA Change Notices: Not Supported 00:30:46.341 PLE Aggregate Log Change Notices: Not Supported 00:30:46.341 LBA Status Info Alert Notices: Not Supported 00:30:46.341 EGE Aggregate Log Change Notices: Not Supported 00:30:46.341 Normal NVM Subsystem Shutdown event: Not Supported 00:30:46.341 Zone Descriptor Change Notices: Not Supported 00:30:46.341 Discovery Log Change Notices: Not Supported 00:30:46.341 Controller Attributes 00:30:46.341 128-bit Host Identifier: Not Supported 00:30:46.341 Non-Operational Permissive Mode: Not Supported 00:30:46.341 NVM Sets: Not Supported 00:30:46.341 Read Recovery Levels: Not Supported 00:30:46.341 Endurance Groups: Not Supported 00:30:46.341 Predictable Latency Mode: Not Supported 00:30:46.341 
Traffic Based Keep ALive: Not Supported 00:30:46.341 Namespace Granularity: Not Supported 00:30:46.341 SQ Associations: Not Supported 00:30:46.341 UUID List: Not Supported 00:30:46.341 Multi-Domain Subsystem: Not Supported 00:30:46.341 Fixed Capacity Management: Not Supported 00:30:46.341 Variable Capacity Management: Not Supported 00:30:46.341 Delete Endurance Group: Not Supported 00:30:46.341 Delete NVM Set: Not Supported 00:30:46.341 Extended LBA Formats Supported: Supported 00:30:46.341 Flexible Data Placement Supported: Not Supported 00:30:46.341 00:30:46.341 Controller Memory Buffer Support 00:30:46.341 ================================ 00:30:46.341 Supported: No 00:30:46.341 00:30:46.341 Persistent Memory Region Support 00:30:46.341 ================================ 00:30:46.341 Supported: No 00:30:46.341 00:30:46.341 Admin Command Set Attributes 00:30:46.341 ============================ 00:30:46.341 Security Send/Receive: Not Supported 00:30:46.341 Format NVM: Supported 00:30:46.341 Firmware Activate/Download: Not Supported 00:30:46.341 Namespace Management: Supported 00:30:46.341 Device Self-Test: Not Supported 00:30:46.341 Directives: Supported 00:30:46.341 NVMe-MI: Not Supported 00:30:46.341 Virtualization Management: Not Supported 00:30:46.341 Doorbell Buffer Config: Supported 00:30:46.341 Get LBA Status Capability: Not Supported 00:30:46.341 Command & Feature Lockdown Capability: Not Supported 00:30:46.341 Abort Command Limit: 4 00:30:46.341 Async Event Request Limit: 4 00:30:46.341 Number of Firmware Slots: N/A 00:30:46.341 Firmware Slot 1 Read-Only: N/A 00:30:46.341 Firmware Activation Without Reset: N/A 00:30:46.341 Multiple Update Detection Support: N/A 00:30:46.341 Firmware Update Granularity: No Information Provided 00:30:46.341 Per-Namespace SMART Log: Yes 00:30:46.341 Asymmetric Namespace Access Log Page: Not Supported 00:30:46.341 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:46.341 Command Effects Log Page: Supported 00:30:46.341 Get Log Page Extended Data: Supported 00:30:46.341 Telemetry Log Pages: Not Supported 00:30:46.341 Persistent Event Log Pages: Not Supported 00:30:46.341 Supported Log Pages Log Page: May Support 00:30:46.341 Commands Supported & Effects Log Page: Not Supported 00:30:46.341 Feature Identifiers & Effects Log Page:May Support 00:30:46.341 NVMe-MI Commands & Effects Log Page: May Support 00:30:46.341 Data Area 4 for Telemetry Log: Not Supported 00:30:46.341 Error Log Page Entries Supported: 1 00:30:46.341 Keep Alive: Not Supported 00:30:46.341 00:30:46.341 NVM Command Set Attributes 00:30:46.341 ========================== 00:30:46.341 Submission Queue Entry Size 00:30:46.341 Max: 64 00:30:46.341 Min: 64 00:30:46.341 Completion Queue Entry Size 00:30:46.341 Max: 16 00:30:46.341 Min: 16 00:30:46.341 Number of Namespaces: 256 00:30:46.341 Compare Command: Supported 00:30:46.341 Write Uncorrectable Command: Not Supported 00:30:46.341 Dataset Management Command: Supported 00:30:46.341 Write Zeroes Command: Supported 00:30:46.341 Set Features Save Field: Supported 00:30:46.341 Reservations: Not Supported 00:30:46.341 Timestamp: Supported 00:30:46.341 Copy: Supported 00:30:46.341 Volatile Write Cache: Present 00:30:46.341 Atomic Write Unit (Normal): 1 00:30:46.341 Atomic Write Unit (PFail): 1 00:30:46.341 Atomic Compare & Write Unit: 1 00:30:46.341 Fused Compare & Write: Not Supported 00:30:46.341 Scatter-Gather List 00:30:46.341 SGL Command Set: Supported 00:30:46.341 SGL Keyed: Not Supported 00:30:46.341 SGL Bit Bucket Descriptor: Not Supported 
00:30:46.341 SGL Metadata Pointer: Not Supported 00:30:46.341 Oversized SGL: Not Supported 00:30:46.341 SGL Metadata Address: Not Supported 00:30:46.341 SGL Offset: Not Supported 00:30:46.341 Transport SGL Data Block: Not Supported 00:30:46.341 Replay Protected Memory Block: Not Supported 00:30:46.341 00:30:46.341 Firmware Slot Information 00:30:46.341 ========================= 00:30:46.341 Active slot: 1 00:30:46.341 Slot 1 Firmware Revision: 1.0 00:30:46.341 00:30:46.341 00:30:46.341 Commands Supported and Effects 00:30:46.341 ============================== 00:30:46.341 Admin Commands 00:30:46.341 -------------- 00:30:46.341 Delete I/O Submission Queue (00h): Supported 00:30:46.341 Create I/O Submission Queue (01h): Supported 00:30:46.341 Get Log Page (02h): Supported 00:30:46.341 Delete I/O Completion Queue (04h): Supported 00:30:46.341 Create I/O Completion Queue (05h): Supported 00:30:46.341 Identify (06h): Supported 00:30:46.341 Abort (08h): Supported 00:30:46.341 Set Features (09h): Supported 00:30:46.341 Get Features (0Ah): Supported 00:30:46.341 Asynchronous Event Request (0Ch): Supported 00:30:46.342 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:46.342 Directive Send (19h): Supported 00:30:46.342 Directive Receive (1Ah): Supported 00:30:46.342 Virtualization Management (1Ch): Supported 00:30:46.342 Doorbell Buffer Config (7Ch): Supported 00:30:46.342 Format NVM (80h): Supported LBA-Change 00:30:46.342 I/O Commands 00:30:46.342 ------------ 00:30:46.342 Flush (00h): Supported LBA-Change 00:30:46.342 Write (01h): Supported LBA-Change 00:30:46.342 Read (02h): Supported 00:30:46.342 Compare (05h): Supported 00:30:46.342 Write Zeroes (08h): Supported LBA-Change 00:30:46.342 Dataset Management (09h): Supported LBA-Change 00:30:46.342 Unknown (0Ch): Supported 00:30:46.342 Unknown (12h): Supported 00:30:46.342 Copy (19h): Supported LBA-Change 00:30:46.342 Unknown (1Dh): Supported LBA-Change 00:30:46.342 00:30:46.342 Error Log 00:30:46.342 ========= 00:30:46.342 00:30:46.342 Arbitration 00:30:46.342 =========== 00:30:46.342 Arbitration Burst: no limit 00:30:46.342 00:30:46.342 Power Management 00:30:46.342 ================ 00:30:46.342 Number of Power States: 1 00:30:46.342 Current Power State: Power State #0 00:30:46.342 Power State #0: 00:30:46.342 Max Power: 25.00 W 00:30:46.342 Non-Operational State: Operational 00:30:46.342 Entry Latency: 16 microseconds 00:30:46.342 Exit Latency: 4 microseconds 00:30:46.342 Relative Read Throughput: 0 00:30:46.342 Relative Read Latency: 0 00:30:46.342 Relative Write Throughput: 0 00:30:46.342 Relative Write Latency: 0 00:30:46.342 Idle Power: Not Reported 00:30:46.342 Active Power: Not Reported 00:30:46.342 Non-Operational Permissive Mode: Not Supported 00:30:46.342 00:30:46.342 Health Information 00:30:46.342 ================== 00:30:46.342 Critical Warnings: 00:30:46.342 Available Spare Space: OK 00:30:46.342 Temperature: OK 00:30:46.342 Device Reliability: OK 00:30:46.342 Read Only: No 00:30:46.342 Volatile Memory Backup: OK 00:30:46.342 Current Temperature: 323 Kelvin (50 Celsius) 00:30:46.342 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:46.342 Available Spare: 0% 00:30:46.342 Available Spare Threshold: 0% 00:30:46.342 Life Percentage Used: 0% 00:30:46.342 Data Units Read: 7785 00:30:46.342 Data Units Written: 3792 00:30:46.342 Host Read Commands: 371124 00:30:46.342 Host Write Commands: 200750 00:30:46.342 Controller Busy Time: 0 minutes 00:30:46.342 Power Cycles: 0 00:30:46.342 Power On Hours: 0 hours 00:30:46.342 
Unsafe Shutdowns: 0 00:30:46.342 Unrecoverable Media Errors: 0 00:30:46.342 Lifetime Error Log Entries: 0 00:30:46.342 Warning Temperature Time: 0 minutes 00:30:46.342 Critical Temperature Time: 0 minutes 00:30:46.342 00:30:46.342 Number of Queues 00:30:46.342 ================ 00:30:46.342 Number of I/O Submission Queues: 64 00:30:46.342 Number of I/O Completion Queues: 64 00:30:46.342 00:30:46.342 ZNS Specific Controller Data 00:30:46.342 ============================ 00:30:46.342 Zone Append Size Limit: 0 00:30:46.342 00:30:46.342 00:30:46.342 Active Namespaces 00:30:46.342 ================= 00:30:46.342 Namespace ID:1 00:30:46.342 Error Recovery Timeout: Unlimited 00:30:46.342 Command Set Identifier: NVM (00h) 00:30:46.342 Deallocate: Supported 00:30:46.342 Deallocated/Unwritten Error: Supported 00:30:46.342 Deallocated Read Value: All 0x00 00:30:46.342 Deallocate in Write Zeroes: Not Supported 00:30:46.342 Deallocated Guard Field: 0xFFFF 00:30:46.342 Flush: Supported 00:30:46.342 Reservation: Not Supported 00:30:46.342 Namespace Sharing Capabilities: Private 00:30:46.342 Size (in LBAs): 1310720 (5GiB) 00:30:46.342 Capacity (in LBAs): 1310720 (5GiB) 00:30:46.342 Utilization (in LBAs): 1310720 (5GiB) 00:30:46.342 Thin Provisioning: Not Supported 00:30:46.342 Per-NS Atomic Units: No 00:30:46.342 Maximum Single Source Range Length: 128 00:30:46.342 Maximum Copy Length: 128 00:30:46.342 Maximum Source Range Count: 128 00:30:46.342 NGUID/EUI64 Never Reused: No 00:30:46.342 Namespace Write Protected: No 00:30:46.342 Number of LBA Formats: 8 00:30:46.342 Current LBA Format: LBA Format #04 00:30:46.342 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:46.342 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:46.342 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:46.342 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:46.342 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:46.342 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:46.342 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:46.342 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:46.342 00:30:46.342 05:13:16 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:46.342 05:13:16 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:30:46.603 ===================================================== 00:30:46.603 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:46.603 ===================================================== 00:30:46.603 Controller Capabilities/Features 00:30:46.603 ================================ 00:30:46.603 Vendor ID: 1b36 00:30:46.603 Subsystem Vendor ID: 1af4 00:30:46.603 Serial Number: 12340 00:30:46.603 Model Number: QEMU NVMe Ctrl 00:30:46.603 Firmware Version: 8.0.0 00:30:46.603 Recommended Arb Burst: 6 00:30:46.603 IEEE OUI Identifier: 00 54 52 00:30:46.603 Multi-path I/O 00:30:46.603 May have multiple subsystem ports: No 00:30:46.603 May have multiple controllers: No 00:30:46.603 Associated with SR-IOV VF: No 00:30:46.603 Max Data Transfer Size: 524288 00:30:46.603 Max Number of Namespaces: 256 00:30:46.603 Max Number of I/O Queues: 64 00:30:46.603 NVMe Specification Version (VS): 1.4 00:30:46.603 NVMe Specification Version (Identify): 1.4 00:30:46.603 Maximum Queue Entries: 2048 00:30:46.603 Contiguous Queues Required: Yes 00:30:46.603 Arbitration Mechanisms Supported 00:30:46.603 Weighted Round Robin: Not Supported 00:30:46.603 Vendor Specific: Not Supported 00:30:46.603 Reset Timeout: 7500 ms 
00:30:46.603 Doorbell Stride: 4 bytes 00:30:46.603 NVM Subsystem Reset: Not Supported 00:30:46.603 Command Sets Supported 00:30:46.603 NVM Command Set: Supported 00:30:46.603 Boot Partition: Not Supported 00:30:46.603 Memory Page Size Minimum: 4096 bytes 00:30:46.603 Memory Page Size Maximum: 65536 bytes 00:30:46.603 Persistent Memory Region: Not Supported 00:30:46.603 Optional Asynchronous Events Supported 00:30:46.603 Namespace Attribute Notices: Supported 00:30:46.603 Firmware Activation Notices: Not Supported 00:30:46.603 ANA Change Notices: Not Supported 00:30:46.603 PLE Aggregate Log Change Notices: Not Supported 00:30:46.603 LBA Status Info Alert Notices: Not Supported 00:30:46.603 EGE Aggregate Log Change Notices: Not Supported 00:30:46.603 Normal NVM Subsystem Shutdown event: Not Supported 00:30:46.603 Zone Descriptor Change Notices: Not Supported 00:30:46.603 Discovery Log Change Notices: Not Supported 00:30:46.603 Controller Attributes 00:30:46.603 128-bit Host Identifier: Not Supported 00:30:46.603 Non-Operational Permissive Mode: Not Supported 00:30:46.603 NVM Sets: Not Supported 00:30:46.603 Read Recovery Levels: Not Supported 00:30:46.603 Endurance Groups: Not Supported 00:30:46.603 Predictable Latency Mode: Not Supported 00:30:46.603 Traffic Based Keep ALive: Not Supported 00:30:46.603 Namespace Granularity: Not Supported 00:30:46.603 SQ Associations: Not Supported 00:30:46.603 UUID List: Not Supported 00:30:46.603 Multi-Domain Subsystem: Not Supported 00:30:46.603 Fixed Capacity Management: Not Supported 00:30:46.603 Variable Capacity Management: Not Supported 00:30:46.603 Delete Endurance Group: Not Supported 00:30:46.603 Delete NVM Set: Not Supported 00:30:46.603 Extended LBA Formats Supported: Supported 00:30:46.603 Flexible Data Placement Supported: Not Supported 00:30:46.603 00:30:46.603 Controller Memory Buffer Support 00:30:46.603 ================================ 00:30:46.603 Supported: No 00:30:46.603 00:30:46.603 Persistent Memory Region Support 00:30:46.603 ================================ 00:30:46.603 Supported: No 00:30:46.603 00:30:46.603 Admin Command Set Attributes 00:30:46.603 ============================ 00:30:46.603 Security Send/Receive: Not Supported 00:30:46.603 Format NVM: Supported 00:30:46.603 Firmware Activate/Download: Not Supported 00:30:46.603 Namespace Management: Supported 00:30:46.603 Device Self-Test: Not Supported 00:30:46.603 Directives: Supported 00:30:46.603 NVMe-MI: Not Supported 00:30:46.603 Virtualization Management: Not Supported 00:30:46.603 Doorbell Buffer Config: Supported 00:30:46.603 Get LBA Status Capability: Not Supported 00:30:46.603 Command & Feature Lockdown Capability: Not Supported 00:30:46.603 Abort Command Limit: 4 00:30:46.603 Async Event Request Limit: 4 00:30:46.603 Number of Firmware Slots: N/A 00:30:46.603 Firmware Slot 1 Read-Only: N/A 00:30:46.603 Firmware Activation Without Reset: N/A 00:30:46.603 Multiple Update Detection Support: N/A 00:30:46.603 Firmware Update Granularity: No Information Provided 00:30:46.603 Per-Namespace SMART Log: Yes 00:30:46.604 Asymmetric Namespace Access Log Page: Not Supported 00:30:46.604 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:46.604 Command Effects Log Page: Supported 00:30:46.604 Get Log Page Extended Data: Supported 00:30:46.604 Telemetry Log Pages: Not Supported 00:30:46.604 Persistent Event Log Pages: Not Supported 00:30:46.604 Supported Log Pages Log Page: May Support 00:30:46.604 Commands Supported & Effects Log Page: Not Supported 00:30:46.604 Feature Identifiers & 
Effects Log Page:May Support 00:30:46.604 NVMe-MI Commands & Effects Log Page: May Support 00:30:46.604 Data Area 4 for Telemetry Log: Not Supported 00:30:46.604 Error Log Page Entries Supported: 1 00:30:46.604 Keep Alive: Not Supported 00:30:46.604 00:30:46.604 NVM Command Set Attributes 00:30:46.604 ========================== 00:30:46.604 Submission Queue Entry Size 00:30:46.604 Max: 64 00:30:46.604 Min: 64 00:30:46.604 Completion Queue Entry Size 00:30:46.604 Max: 16 00:30:46.604 Min: 16 00:30:46.604 Number of Namespaces: 256 00:30:46.604 Compare Command: Supported 00:30:46.604 Write Uncorrectable Command: Not Supported 00:30:46.604 Dataset Management Command: Supported 00:30:46.604 Write Zeroes Command: Supported 00:30:46.604 Set Features Save Field: Supported 00:30:46.604 Reservations: Not Supported 00:30:46.604 Timestamp: Supported 00:30:46.604 Copy: Supported 00:30:46.604 Volatile Write Cache: Present 00:30:46.604 Atomic Write Unit (Normal): 1 00:30:46.604 Atomic Write Unit (PFail): 1 00:30:46.604 Atomic Compare & Write Unit: 1 00:30:46.604 Fused Compare & Write: Not Supported 00:30:46.604 Scatter-Gather List 00:30:46.604 SGL Command Set: Supported 00:30:46.604 SGL Keyed: Not Supported 00:30:46.604 SGL Bit Bucket Descriptor: Not Supported 00:30:46.604 SGL Metadata Pointer: Not Supported 00:30:46.604 Oversized SGL: Not Supported 00:30:46.604 SGL Metadata Address: Not Supported 00:30:46.604 SGL Offset: Not Supported 00:30:46.604 Transport SGL Data Block: Not Supported 00:30:46.604 Replay Protected Memory Block: Not Supported 00:30:46.604 00:30:46.604 Firmware Slot Information 00:30:46.604 ========================= 00:30:46.604 Active slot: 1 00:30:46.604 Slot 1 Firmware Revision: 1.0 00:30:46.604 00:30:46.604 00:30:46.604 Commands Supported and Effects 00:30:46.604 ============================== 00:30:46.604 Admin Commands 00:30:46.604 -------------- 00:30:46.604 Delete I/O Submission Queue (00h): Supported 00:30:46.604 Create I/O Submission Queue (01h): Supported 00:30:46.604 Get Log Page (02h): Supported 00:30:46.604 Delete I/O Completion Queue (04h): Supported 00:30:46.604 Create I/O Completion Queue (05h): Supported 00:30:46.604 Identify (06h): Supported 00:30:46.604 Abort (08h): Supported 00:30:46.604 Set Features (09h): Supported 00:30:46.604 Get Features (0Ah): Supported 00:30:46.604 Asynchronous Event Request (0Ch): Supported 00:30:46.604 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:46.604 Directive Send (19h): Supported 00:30:46.604 Directive Receive (1Ah): Supported 00:30:46.604 Virtualization Management (1Ch): Supported 00:30:46.604 Doorbell Buffer Config (7Ch): Supported 00:30:46.604 Format NVM (80h): Supported LBA-Change 00:30:46.604 I/O Commands 00:30:46.604 ------------ 00:30:46.604 Flush (00h): Supported LBA-Change 00:30:46.604 Write (01h): Supported LBA-Change 00:30:46.604 Read (02h): Supported 00:30:46.604 Compare (05h): Supported 00:30:46.604 Write Zeroes (08h): Supported LBA-Change 00:30:46.604 Dataset Management (09h): Supported LBA-Change 00:30:46.604 Unknown (0Ch): Supported 00:30:46.604 Unknown (12h): Supported 00:30:46.604 Copy (19h): Supported LBA-Change 00:30:46.604 Unknown (1Dh): Supported LBA-Change 00:30:46.604 00:30:46.604 Error Log 00:30:46.604 ========= 00:30:46.604 00:30:46.604 Arbitration 00:30:46.604 =========== 00:30:46.604 Arbitration Burst: no limit 00:30:46.604 00:30:46.604 Power Management 00:30:46.604 ================ 00:30:46.604 Number of Power States: 1 00:30:46.604 Current Power State: Power State #0 00:30:46.604 Power 
State #0: 00:30:46.604 Max Power: 25.00 W 00:30:46.604 Non-Operational State: Operational 00:30:46.604 Entry Latency: 16 microseconds 00:30:46.604 Exit Latency: 4 microseconds 00:30:46.604 Relative Read Throughput: 0 00:30:46.604 Relative Read Latency: 0 00:30:46.604 Relative Write Throughput: 0 00:30:46.604 Relative Write Latency: 0 00:30:46.604 Idle Power: Not Reported 00:30:46.604 Active Power: Not Reported 00:30:46.604 Non-Operational Permissive Mode: Not Supported 00:30:46.604 00:30:46.604 Health Information 00:30:46.604 ================== 00:30:46.604 Critical Warnings: 00:30:46.604 Available Spare Space: OK 00:30:46.604 Temperature: OK 00:30:46.604 Device Reliability: OK 00:30:46.604 Read Only: No 00:30:46.604 Volatile Memory Backup: OK 00:30:46.604 Current Temperature: 323 Kelvin (50 Celsius) 00:30:46.604 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:46.604 Available Spare: 0% 00:30:46.604 Available Spare Threshold: 0% 00:30:46.604 Life Percentage Used: 0% 00:30:46.604 Data Units Read: 7785 00:30:46.604 Data Units Written: 3792 00:30:46.604 Host Read Commands: 371124 00:30:46.604 Host Write Commands: 200750 00:30:46.604 Controller Busy Time: 0 minutes 00:30:46.604 Power Cycles: 0 00:30:46.604 Power On Hours: 0 hours 00:30:46.604 Unsafe Shutdowns: 0 00:30:46.604 Unrecoverable Media Errors: 0 00:30:46.604 Lifetime Error Log Entries: 0 00:30:46.604 Warning Temperature Time: 0 minutes 00:30:46.604 Critical Temperature Time: 0 minutes 00:30:46.604 00:30:46.604 Number of Queues 00:30:46.604 ================ 00:30:46.604 Number of I/O Submission Queues: 64 00:30:46.604 Number of I/O Completion Queues: 64 00:30:46.604 00:30:46.604 ZNS Specific Controller Data 00:30:46.604 ============================ 00:30:46.604 Zone Append Size Limit: 0 00:30:46.604 00:30:46.604 00:30:46.604 Active Namespaces 00:30:46.604 ================= 00:30:46.604 Namespace ID:1 00:30:46.604 Error Recovery Timeout: Unlimited 00:30:46.604 Command Set Identifier: NVM (00h) 00:30:46.604 Deallocate: Supported 00:30:46.604 Deallocated/Unwritten Error: Supported 00:30:46.604 Deallocated Read Value: All 0x00 00:30:46.604 Deallocate in Write Zeroes: Not Supported 00:30:46.604 Deallocated Guard Field: 0xFFFF 00:30:46.604 Flush: Supported 00:30:46.604 Reservation: Not Supported 00:30:46.604 Namespace Sharing Capabilities: Private 00:30:46.604 Size (in LBAs): 1310720 (5GiB) 00:30:46.604 Capacity (in LBAs): 1310720 (5GiB) 00:30:46.604 Utilization (in LBAs): 1310720 (5GiB) 00:30:46.604 Thin Provisioning: Not Supported 00:30:46.604 Per-NS Atomic Units: No 00:30:46.604 Maximum Single Source Range Length: 128 00:30:46.604 Maximum Copy Length: 128 00:30:46.604 Maximum Source Range Count: 128 00:30:46.604 NGUID/EUI64 Never Reused: No 00:30:46.604 Namespace Write Protected: No 00:30:46.604 Number of LBA Formats: 8 00:30:46.604 Current LBA Format: LBA Format #04 00:30:46.604 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:46.604 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:46.604 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:46.604 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:46.604 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:46.604 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:46.604 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:46.604 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:46.604 00:30:46.604 00:30:46.604 real 0m0.677s 00:30:46.604 user 0m0.262s 00:30:46.604 sys 0m0.310s 00:30:46.604 05:13:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:30:46.604 05:13:16 -- common/autotest_common.sh@10 -- # set +x 00:30:46.604 ************************************ 00:30:46.604 END TEST nvme_identify 00:30:46.604 ************************************ 00:30:46.604 05:13:16 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:30:46.604 05:13:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:46.604 05:13:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:46.604 05:13:16 -- common/autotest_common.sh@10 -- # set +x 00:30:46.604 ************************************ 00:30:46.604 START TEST nvme_perf 00:30:46.604 ************************************ 00:30:46.863 05:13:16 -- common/autotest_common.sh@1104 -- # nvme_perf 00:30:46.863 05:13:16 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:30:48.241 Initializing NVMe Controllers 00:30:48.241 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:48.241 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:30:48.241 Initialization complete. Launching workers. 00:30:48.241 ======================================================== 00:30:48.241 Latency(us) 00:30:48.241 Device Information : IOPS MiB/s Average min max 00:30:48.241 PCIE (0000:00:06.0) NSID 1 from core 0: 53497.46 626.92 2393.90 879.13 7054.66 00:30:48.241 ======================================================== 00:30:48.241 Total : 53497.46 626.92 2393.90 879.13 7054.66 00:30:48.241 00:30:48.241 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:48.241 ================================================================================= 00:30:48.241 1.00000% : 1452.218us 00:30:48.241 10.00000% : 1675.636us 00:30:48.241 25.00000% : 1936.291us 00:30:48.241 50.00000% : 2383.127us 00:30:48.241 75.00000% : 2800.175us 00:30:48.241 90.00000% : 3098.065us 00:30:48.241 95.00000% : 3336.378us 00:30:48.241 98.00000% : 3634.269us 00:30:48.241 99.00000% : 3783.215us 00:30:48.241 99.50000% : 4081.105us 00:30:48.241 99.90000% : 5868.451us 00:30:48.241 99.99000% : 6911.069us 00:30:48.241 99.99900% : 7060.015us 00:30:48.241 99.99990% : 7060.015us 00:30:48.241 99.99999% : 7060.015us 00:30:48.241 00:30:48.241 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:48.241 ============================================================================== 00:30:48.241 Range in us Cumulative IO count 00:30:48.241 878.778 - 882.502: 0.0019% ( 1) 00:30:48.241 1288.378 - 1295.825: 0.0075% ( 3) 00:30:48.242 1295.825 - 1303.273: 0.0093% ( 1) 00:30:48.242 1310.720 - 1318.167: 0.0131% ( 2) 00:30:48.242 1318.167 - 1325.615: 0.0318% ( 10) 00:30:48.242 1325.615 - 1333.062: 0.0336% ( 1) 00:30:48.242 1333.062 - 1340.509: 0.0486% ( 8) 00:30:48.242 1340.509 - 1347.956: 0.0635% ( 8) 00:30:48.242 1347.956 - 1355.404: 0.0804% ( 9) 00:30:48.242 1355.404 - 1362.851: 0.1047% ( 13) 00:30:48.242 1362.851 - 1370.298: 0.1196% ( 8) 00:30:48.242 1370.298 - 1377.745: 0.1551% ( 19) 00:30:48.242 1377.745 - 1385.193: 0.1981% ( 23) 00:30:48.242 1385.193 - 1392.640: 0.2542% ( 30) 00:30:48.242 1392.640 - 1400.087: 0.3140% ( 32) 00:30:48.242 1400.087 - 1407.535: 0.3719% ( 31) 00:30:48.242 1407.535 - 1414.982: 0.4467% ( 40) 00:30:48.242 1414.982 - 1422.429: 0.5364% ( 48) 00:30:48.242 1422.429 - 1429.876: 0.6317% ( 51) 00:30:48.242 1429.876 - 1437.324: 0.7625% ( 70) 00:30:48.242 1437.324 - 1444.771: 0.8840% ( 65) 00:30:48.242 1444.771 - 1452.218: 1.0541% ( 91) 00:30:48.242 1452.218 - 1459.665: 1.2167% ( 87) 00:30:48.242 1459.665 - 1467.113: 1.4148% ( 106) 
00:30:48.242 1467.113 - 1474.560: 1.6111% ( 105) 00:30:48.242 1474.560 - 1482.007: 1.8297% ( 117) 00:30:48.242 1482.007 - 1489.455: 2.0671% ( 127) 00:30:48.242 1489.455 - 1496.902: 2.3119% ( 131) 00:30:48.242 1496.902 - 1504.349: 2.5568% ( 131) 00:30:48.242 1504.349 - 1511.796: 2.8614% ( 163) 00:30:48.242 1511.796 - 1519.244: 3.1324% ( 145) 00:30:48.242 1519.244 - 1526.691: 3.3922% ( 139) 00:30:48.242 1526.691 - 1534.138: 3.6595% ( 143) 00:30:48.242 1534.138 - 1541.585: 3.9417% ( 151) 00:30:48.242 1541.585 - 1549.033: 4.2463% ( 163) 00:30:48.242 1549.033 - 1556.480: 4.5323% ( 153) 00:30:48.242 1556.480 - 1563.927: 4.8500% ( 170) 00:30:48.242 1563.927 - 1571.375: 5.1640% ( 168) 00:30:48.242 1571.375 - 1578.822: 5.4724% ( 165) 00:30:48.242 1578.822 - 1586.269: 5.8331% ( 193) 00:30:48.242 1586.269 - 1593.716: 6.1919% ( 192) 00:30:48.242 1593.716 - 1601.164: 6.5564% ( 195) 00:30:48.242 1601.164 - 1608.611: 6.9395% ( 205) 00:30:48.242 1608.611 - 1616.058: 7.3133% ( 200) 00:30:48.242 1616.058 - 1623.505: 7.6778% ( 195) 00:30:48.242 1623.505 - 1630.953: 8.0553% ( 202) 00:30:48.242 1630.953 - 1638.400: 8.4086% ( 189) 00:30:48.242 1638.400 - 1645.847: 8.8179% ( 219) 00:30:48.242 1645.847 - 1653.295: 9.2141% ( 212) 00:30:48.242 1653.295 - 1660.742: 9.5935% ( 203) 00:30:48.242 1660.742 - 1668.189: 9.9879% ( 211) 00:30:48.242 1668.189 - 1675.636: 10.3897% ( 215) 00:30:48.242 1675.636 - 1683.084: 10.8027% ( 221) 00:30:48.242 1683.084 - 1690.531: 11.2046% ( 215) 00:30:48.242 1690.531 - 1697.978: 11.5989% ( 211) 00:30:48.242 1697.978 - 1705.425: 12.0232% ( 227) 00:30:48.242 1705.425 - 1712.873: 12.3970% ( 200) 00:30:48.242 1712.873 - 1720.320: 12.8268% ( 230) 00:30:48.242 1720.320 - 1727.767: 13.2343% ( 218) 00:30:48.242 1727.767 - 1735.215: 13.6567% ( 226) 00:30:48.242 1735.215 - 1742.662: 14.0772% ( 225) 00:30:48.242 1742.662 - 1750.109: 14.5052% ( 229) 00:30:48.242 1750.109 - 1757.556: 14.9313% ( 228) 00:30:48.242 1757.556 - 1765.004: 15.3612% ( 230) 00:30:48.242 1765.004 - 1772.451: 15.7854% ( 227) 00:30:48.242 1772.451 - 1779.898: 16.2172% ( 231) 00:30:48.242 1779.898 - 1787.345: 16.6508% ( 232) 00:30:48.242 1787.345 - 1794.793: 17.0713% ( 225) 00:30:48.242 1794.793 - 1802.240: 17.5086% ( 234) 00:30:48.242 1802.240 - 1809.687: 17.9385% ( 230) 00:30:48.242 1809.687 - 1817.135: 18.3553% ( 223) 00:30:48.242 1817.135 - 1824.582: 18.7739% ( 224) 00:30:48.242 1824.582 - 1832.029: 19.2001% ( 228) 00:30:48.242 1832.029 - 1839.476: 19.6318% ( 231) 00:30:48.242 1839.476 - 1846.924: 20.0860% ( 243) 00:30:48.242 1846.924 - 1854.371: 20.4990% ( 221) 00:30:48.242 1854.371 - 1861.818: 20.9326% ( 232) 00:30:48.242 1861.818 - 1869.265: 21.3531% ( 225) 00:30:48.242 1869.265 - 1876.713: 21.7905% ( 234) 00:30:48.242 1876.713 - 1884.160: 22.2390% ( 240) 00:30:48.242 1884.160 - 1891.607: 22.6371% ( 213) 00:30:48.242 1891.607 - 1899.055: 23.0932% ( 244) 00:30:48.242 1899.055 - 1906.502: 23.4894% ( 212) 00:30:48.242 1906.502 - 1921.396: 24.3547% ( 463) 00:30:48.242 1921.396 - 1936.291: 25.2014% ( 453) 00:30:48.242 1936.291 - 1951.185: 26.0779% ( 469) 00:30:48.242 1951.185 - 1966.080: 26.9022% ( 441) 00:30:48.242 1966.080 - 1980.975: 27.7638% ( 461) 00:30:48.242 1980.975 - 1995.869: 28.6216% ( 459) 00:30:48.242 1995.869 - 2010.764: 29.4851% ( 462) 00:30:48.242 2010.764 - 2025.658: 30.3317% ( 453) 00:30:48.242 2025.658 - 2040.553: 31.1765% ( 452) 00:30:48.242 2040.553 - 2055.447: 32.0269% ( 455) 00:30:48.242 2055.447 - 2070.342: 32.8904% ( 462) 00:30:48.242 2070.342 - 2085.236: 33.7595% ( 465) 00:30:48.242 2085.236 - 
2100.131: 34.6155% ( 458) 00:30:48.242 2100.131 - 2115.025: 35.4752% ( 460) 00:30:48.242 2115.025 - 2129.920: 36.3200% ( 452) 00:30:48.242 2129.920 - 2144.815: 37.1629% ( 451) 00:30:48.242 2144.815 - 2159.709: 38.0077% ( 452) 00:30:48.242 2159.709 - 2174.604: 38.8973% ( 476) 00:30:48.242 2174.604 - 2189.498: 39.7010% ( 430) 00:30:48.242 2189.498 - 2204.393: 40.5532% ( 456) 00:30:48.242 2204.393 - 2219.287: 41.3999% ( 453) 00:30:48.242 2219.287 - 2234.182: 42.2428% ( 451) 00:30:48.242 2234.182 - 2249.076: 43.1063% ( 462) 00:30:48.242 2249.076 - 2263.971: 43.9660% ( 460) 00:30:48.242 2263.971 - 2278.865: 44.8295% ( 462) 00:30:48.242 2278.865 - 2293.760: 45.6593% ( 444) 00:30:48.242 2293.760 - 2308.655: 46.5489% ( 476) 00:30:48.242 2308.655 - 2323.549: 47.3993% ( 455) 00:30:48.242 2323.549 - 2338.444: 48.2422% ( 451) 00:30:48.242 2338.444 - 2353.338: 49.1020% ( 460) 00:30:48.242 2353.338 - 2368.233: 49.9636% ( 461) 00:30:48.242 2368.233 - 2383.127: 50.8326% ( 465) 00:30:48.242 2383.127 - 2398.022: 51.7054% ( 467) 00:30:48.242 2398.022 - 2412.916: 52.5670% ( 461) 00:30:48.242 2412.916 - 2427.811: 53.4212% ( 457) 00:30:48.242 2427.811 - 2442.705: 54.2884% ( 464) 00:30:48.242 2442.705 - 2457.600: 55.1743% ( 474) 00:30:48.242 2457.600 - 2472.495: 56.0359% ( 461) 00:30:48.242 2472.495 - 2487.389: 56.8769% ( 450) 00:30:48.242 2487.389 - 2502.284: 57.7628% ( 474) 00:30:48.242 2502.284 - 2517.178: 58.6338% ( 466) 00:30:48.242 2517.178 - 2532.073: 59.5010% ( 464) 00:30:48.242 2532.073 - 2546.967: 60.3439% ( 451) 00:30:48.242 2546.967 - 2561.862: 61.2279% ( 473) 00:30:48.242 2561.862 - 2576.756: 62.1026% ( 468) 00:30:48.242 2576.756 - 2591.651: 62.9773% ( 468) 00:30:48.242 2591.651 - 2606.545: 63.8183% ( 450) 00:30:48.242 2606.545 - 2621.440: 64.6725% ( 457) 00:30:48.242 2621.440 - 2636.335: 65.5490% ( 469) 00:30:48.242 2636.335 - 2651.229: 66.4218% ( 467) 00:30:48.242 2651.229 - 2666.124: 67.2928% ( 466) 00:30:48.242 2666.124 - 2681.018: 68.1376% ( 452) 00:30:48.242 2681.018 - 2695.913: 68.9917% ( 457) 00:30:48.242 2695.913 - 2710.807: 69.8757% ( 473) 00:30:48.242 2710.807 - 2725.702: 70.7317% ( 458) 00:30:48.242 2725.702 - 2740.596: 71.6176% ( 474) 00:30:48.242 2740.596 - 2755.491: 72.4661% ( 454) 00:30:48.242 2755.491 - 2770.385: 73.3109% ( 452) 00:30:48.242 2770.385 - 2785.280: 74.1987% ( 475) 00:30:48.243 2785.280 - 2800.175: 75.0453% ( 453) 00:30:48.243 2800.175 - 2815.069: 75.8864% ( 450) 00:30:48.243 2815.069 - 2829.964: 76.7349% ( 454) 00:30:48.243 2829.964 - 2844.858: 77.6096% ( 468) 00:30:48.243 2844.858 - 2859.753: 78.5067% ( 480) 00:30:48.243 2859.753 - 2874.647: 79.3272% ( 439) 00:30:48.243 2874.647 - 2889.542: 80.1813% ( 457) 00:30:48.243 2889.542 - 2904.436: 81.0074% ( 442) 00:30:48.243 2904.436 - 2919.331: 81.8727% ( 463) 00:30:48.243 2919.331 - 2934.225: 82.7100% ( 448) 00:30:48.243 2934.225 - 2949.120: 83.5455% ( 447) 00:30:48.243 2949.120 - 2964.015: 84.3865% ( 450) 00:30:48.243 2964.015 - 2978.909: 85.1771% ( 423) 00:30:48.243 2978.909 - 2993.804: 85.9695% ( 424) 00:30:48.243 2993.804 - 3008.698: 86.7190% ( 401) 00:30:48.243 3008.698 - 3023.593: 87.4273% ( 379) 00:30:48.243 3023.593 - 3038.487: 88.0964% ( 358) 00:30:48.243 3038.487 - 3053.382: 88.7356% ( 342) 00:30:48.243 3053.382 - 3068.276: 89.3300% ( 318) 00:30:48.243 3068.276 - 3083.171: 89.8963% ( 303) 00:30:48.243 3083.171 - 3098.065: 90.4289% ( 285) 00:30:48.243 3098.065 - 3112.960: 90.8719% ( 237) 00:30:48.243 3112.960 - 3127.855: 91.2999% ( 229) 00:30:48.243 3127.855 - 3142.749: 91.6998% ( 214) 00:30:48.243 3142.749 - 
3157.644: 92.0718% ( 199) 00:30:48.243 3157.644 - 3172.538: 92.4026% ( 177) 00:30:48.243 3172.538 - 3187.433: 92.7184% ( 169) 00:30:48.243 3187.433 - 3202.327: 93.0212% ( 162) 00:30:48.243 3202.327 - 3217.222: 93.3016% ( 150) 00:30:48.243 3217.222 - 3232.116: 93.5744% ( 146) 00:30:48.243 3232.116 - 3247.011: 93.8454% ( 145) 00:30:48.243 3247.011 - 3261.905: 94.0865% ( 129) 00:30:48.243 3261.905 - 3276.800: 94.3127% ( 121) 00:30:48.243 3276.800 - 3291.695: 94.5276% ( 115) 00:30:48.243 3291.695 - 3306.589: 94.7407% ( 114) 00:30:48.243 3306.589 - 3321.484: 94.9519% ( 113) 00:30:48.243 3321.484 - 3336.378: 95.1369% ( 99) 00:30:48.243 3336.378 - 3351.273: 95.3163% ( 96) 00:30:48.243 3351.273 - 3366.167: 95.4845% ( 90) 00:30:48.243 3366.167 - 3381.062: 95.6527% ( 90) 00:30:48.243 3381.062 - 3395.956: 95.8228% ( 91) 00:30:48.243 3395.956 - 3410.851: 95.9798% ( 84) 00:30:48.243 3410.851 - 3425.745: 96.1331% ( 82) 00:30:48.243 3425.745 - 3440.640: 96.2751% ( 76) 00:30:48.243 3440.640 - 3455.535: 96.4321% ( 84) 00:30:48.243 3455.535 - 3470.429: 96.5648% ( 71) 00:30:48.243 3470.429 - 3485.324: 96.7125% ( 79) 00:30:48.243 3485.324 - 3500.218: 96.8601% ( 79) 00:30:48.243 3500.218 - 3515.113: 97.0078% ( 79) 00:30:48.243 3515.113 - 3530.007: 97.1442% ( 73) 00:30:48.243 3530.007 - 3544.902: 97.2806% ( 73) 00:30:48.243 3544.902 - 3559.796: 97.4133% ( 71) 00:30:48.243 3559.796 - 3574.691: 97.5591% ( 78) 00:30:48.243 3574.691 - 3589.585: 97.6862% ( 68) 00:30:48.243 3589.585 - 3604.480: 97.8152% ( 69) 00:30:48.243 3604.480 - 3619.375: 97.9385% ( 66) 00:30:48.243 3619.375 - 3634.269: 98.0637% ( 67) 00:30:48.243 3634.269 - 3649.164: 98.1908% ( 68) 00:30:48.243 3649.164 - 3664.058: 98.3067% ( 62) 00:30:48.243 3664.058 - 3678.953: 98.4244% ( 63) 00:30:48.243 3678.953 - 3693.847: 98.5310% ( 57) 00:30:48.243 3693.847 - 3708.742: 98.6319% ( 54) 00:30:48.243 3708.742 - 3723.636: 98.7347% ( 55) 00:30:48.243 3723.636 - 3738.531: 98.8188% ( 45) 00:30:48.243 3738.531 - 3753.425: 98.8954% ( 41) 00:30:48.243 3753.425 - 3768.320: 98.9552% ( 32) 00:30:48.243 3768.320 - 3783.215: 99.0113% ( 30) 00:30:48.243 3783.215 - 3798.109: 99.0562% ( 24) 00:30:48.243 3798.109 - 3813.004: 99.1029% ( 25) 00:30:48.243 3813.004 - 3842.793: 99.2001% ( 52) 00:30:48.243 3842.793 - 3872.582: 99.2748% ( 40) 00:30:48.243 3872.582 - 3902.371: 99.3402% ( 35) 00:30:48.243 3902.371 - 3932.160: 99.3814% ( 22) 00:30:48.243 3932.160 - 3961.949: 99.4169% ( 19) 00:30:48.243 3961.949 - 3991.738: 99.4430% ( 14) 00:30:48.243 3991.738 - 4021.527: 99.4599% ( 9) 00:30:48.243 4021.527 - 4051.316: 99.4804% ( 11) 00:30:48.243 4051.316 - 4081.105: 99.5010% ( 11) 00:30:48.243 4081.105 - 4110.895: 99.5234% ( 12) 00:30:48.243 4110.895 - 4140.684: 99.5440% ( 11) 00:30:48.243 4140.684 - 4170.473: 99.5608% ( 9) 00:30:48.243 4170.473 - 4200.262: 99.5813% ( 11) 00:30:48.243 4200.262 - 4230.051: 99.6019% ( 11) 00:30:48.243 4230.051 - 4259.840: 99.6262% ( 13) 00:30:48.243 4259.840 - 4289.629: 99.6486% ( 12) 00:30:48.243 4289.629 - 4319.418: 99.6729% ( 13) 00:30:48.243 4319.418 - 4349.207: 99.6935% ( 11) 00:30:48.243 4349.207 - 4378.996: 99.7159% ( 12) 00:30:48.243 4378.996 - 4408.785: 99.7327% ( 9) 00:30:48.243 4408.785 - 4438.575: 99.7477% ( 8) 00:30:48.243 4438.575 - 4468.364: 99.7608% ( 7) 00:30:48.243 4468.364 - 4498.153: 99.7720% ( 6) 00:30:48.243 4498.153 - 4527.942: 99.7776% ( 3) 00:30:48.243 4527.942 - 4557.731: 99.7832% ( 3) 00:30:48.243 4557.731 - 4587.520: 99.7888% ( 3) 00:30:48.243 4587.520 - 4617.309: 99.7944% ( 3) 00:30:48.243 4617.309 - 4647.098: 99.7963% ( 1) 
00:30:48.243 4647.098 - 4676.887: 99.8000% ( 2) 00:30:48.243 4676.887 - 4706.676: 99.8019% ( 1) 00:30:48.243 4706.676 - 4736.465: 99.8056% ( 2) 00:30:48.243 4736.465 - 4766.255: 99.8075% ( 1) 00:30:48.243 4766.255 - 4796.044: 99.8094% ( 1) 00:30:48.243 4796.044 - 4825.833: 99.8131% ( 2) 00:30:48.243 4825.833 - 4855.622: 99.8150% ( 1) 00:30:48.243 4855.622 - 4885.411: 99.8187% ( 2) 00:30:48.243 4885.411 - 4915.200: 99.8206% ( 1) 00:30:48.243 4915.200 - 4944.989: 99.8224% ( 1) 00:30:48.243 4944.989 - 4974.778: 99.8262% ( 2) 00:30:48.243 4974.778 - 5004.567: 99.8281% ( 1) 00:30:48.243 5004.567 - 5034.356: 99.8318% ( 2) 00:30:48.243 5064.145 - 5093.935: 99.8337% ( 1) 00:30:48.243 5093.935 - 5123.724: 99.8374% ( 2) 00:30:48.243 5123.724 - 5153.513: 99.8393% ( 1) 00:30:48.243 5153.513 - 5183.302: 99.8430% ( 2) 00:30:48.243 5183.302 - 5213.091: 99.8449% ( 1) 00:30:48.243 5213.091 - 5242.880: 99.8486% ( 2) 00:30:48.243 5242.880 - 5272.669: 99.8505% ( 1) 00:30:48.243 5272.669 - 5302.458: 99.8524% ( 1) 00:30:48.243 5302.458 - 5332.247: 99.8542% ( 1) 00:30:48.243 5332.247 - 5362.036: 99.8580% ( 2) 00:30:48.243 5362.036 - 5391.825: 99.8598% ( 1) 00:30:48.243 5391.825 - 5421.615: 99.8636% ( 2) 00:30:48.243 5421.615 - 5451.404: 99.8654% ( 1) 00:30:48.243 5451.404 - 5481.193: 99.8692% ( 2) 00:30:48.243 5481.193 - 5510.982: 99.8710% ( 1) 00:30:48.243 5510.982 - 5540.771: 99.8729% ( 1) 00:30:48.243 5540.771 - 5570.560: 99.8766% ( 2) 00:30:48.243 5570.560 - 5600.349: 99.8785% ( 1) 00:30:48.243 5600.349 - 5630.138: 99.8804% ( 1) 00:30:48.243 5630.138 - 5659.927: 99.8841% ( 2) 00:30:48.243 5659.927 - 5689.716: 99.8860% ( 1) 00:30:48.243 5689.716 - 5719.505: 99.8897% ( 2) 00:30:48.243 5719.505 - 5749.295: 99.8916% ( 1) 00:30:48.243 5749.295 - 5779.084: 99.8935% ( 1) 00:30:48.243 5779.084 - 5808.873: 99.8953% ( 1) 00:30:48.243 5808.873 - 5838.662: 99.8991% ( 2) 00:30:48.243 5838.662 - 5868.451: 99.9009% ( 1) 00:30:48.243 5868.451 - 5898.240: 99.9028% ( 1) 00:30:48.243 5898.240 - 5928.029: 99.9047% ( 1) 00:30:48.243 5928.029 - 5957.818: 99.9084% ( 2) 00:30:48.243 5957.818 - 5987.607: 99.9103% ( 1) 00:30:48.243 5987.607 - 6017.396: 99.9140% ( 2) 00:30:48.244 6017.396 - 6047.185: 99.9159% ( 1) 00:30:48.244 6047.185 - 6076.975: 99.9196% ( 2) 00:30:48.244 6076.975 - 6106.764: 99.9215% ( 1) 00:30:48.244 6106.764 - 6136.553: 99.9234% ( 1) 00:30:48.244 6136.553 - 6166.342: 99.9271% ( 2) 00:30:48.244 6166.342 - 6196.131: 99.9290% ( 1) 00:30:48.244 6196.131 - 6225.920: 99.9327% ( 2) 00:30:48.244 6225.920 - 6255.709: 99.9346% ( 1) 00:30:48.244 6255.709 - 6285.498: 99.9365% ( 1) 00:30:48.244 6285.498 - 6315.287: 99.9402% ( 2) 00:30:48.244 6315.287 - 6345.076: 99.9421% ( 1) 00:30:48.244 6345.076 - 6374.865: 99.9458% ( 2) 00:30:48.244 6374.865 - 6404.655: 99.9477% ( 1) 00:30:48.244 6404.655 - 6434.444: 99.9495% ( 1) 00:30:48.244 6434.444 - 6464.233: 99.9514% ( 1) 00:30:48.244 6464.233 - 6494.022: 99.9533% ( 1) 00:30:48.244 6494.022 - 6523.811: 99.9570% ( 2) 00:30:48.244 6523.811 - 6553.600: 99.9589% ( 1) 00:30:48.244 6553.600 - 6583.389: 99.9626% ( 2) 00:30:48.244 6583.389 - 6613.178: 99.9645% ( 1) 00:30:48.244 6613.178 - 6642.967: 99.9664% ( 1) 00:30:48.244 6642.967 - 6672.756: 99.9701% ( 2) 00:30:48.244 6672.756 - 6702.545: 99.9720% ( 1) 00:30:48.244 6702.545 - 6732.335: 99.9757% ( 2) 00:30:48.244 6732.335 - 6762.124: 99.9776% ( 1) 00:30:48.244 6762.124 - 6791.913: 99.9794% ( 1) 00:30:48.244 6791.913 - 6821.702: 99.9832% ( 2) 00:30:48.244 6821.702 - 6851.491: 99.9850% ( 1) 00:30:48.244 6851.491 - 6881.280: 99.9888% ( 2) 
00:30:48.244 6881.280 - 6911.069: 99.9907% ( 1) 00:30:48.244 6911.069 - 6940.858: 99.9925% ( 1) 00:30:48.244 6940.858 - 6970.647: 99.9944% ( 1) 00:30:48.244 6970.647 - 7000.436: 99.9981% ( 2) 00:30:48.244 7030.225 - 7060.015: 100.0000% ( 1) 00:30:48.244 00:30:48.244 05:13:17 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:30:49.619 Initializing NVMe Controllers 00:30:49.619 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:49.619 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:30:49.619 Initialization complete. Launching workers. 00:30:49.619 ======================================================== 00:30:49.619 Latency(us) 00:30:49.619 Device Information : IOPS MiB/s Average min max 00:30:49.619 PCIE (0000:00:06.0) NSID 1 from core 0: 54637.86 640.29 2342.76 1166.55 5346.99 00:30:49.619 ======================================================== 00:30:49.619 Total : 54637.86 640.29 2342.76 1166.55 5346.99 00:30:49.619 00:30:49.620 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:49.620 ================================================================================= 00:30:49.620 1.00000% : 1690.531us 00:30:49.620 10.00000% : 1936.291us 00:30:49.620 25.00000% : 2085.236us 00:30:49.620 50.00000% : 2278.865us 00:30:49.620 75.00000% : 2532.073us 00:30:49.620 90.00000% : 2874.647us 00:30:49.620 95.00000% : 3112.960us 00:30:49.620 98.00000% : 3306.589us 00:30:49.620 99.00000% : 3440.640us 00:30:49.620 99.50000% : 3574.691us 00:30:49.620 99.90000% : 4289.629us 00:30:49.620 99.99000% : 5242.880us 00:30:49.620 99.99900% : 5362.036us 00:30:49.620 99.99990% : 5362.036us 00:30:49.620 99.99999% : 5362.036us 00:30:49.620 00:30:49.620 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:49.620 ============================================================================== 00:30:49.620 Range in us Cumulative IO count 00:30:49.620 1161.775 - 1169.222: 0.0018% ( 1) 00:30:49.620 1228.800 - 1236.247: 0.0055% ( 2) 00:30:49.620 1243.695 - 1251.142: 0.0091% ( 2) 00:30:49.620 1251.142 - 1258.589: 0.0110% ( 1) 00:30:49.620 1258.589 - 1266.036: 0.0128% ( 1) 00:30:49.620 1280.931 - 1288.378: 0.0165% ( 2) 00:30:49.620 1303.273 - 1310.720: 0.0183% ( 1) 00:30:49.620 1310.720 - 1318.167: 0.0201% ( 1) 00:30:49.620 1318.167 - 1325.615: 0.0219% ( 1) 00:30:49.620 1347.956 - 1355.404: 0.0238% ( 1) 00:30:49.620 1355.404 - 1362.851: 0.0274% ( 2) 00:30:49.620 1362.851 - 1370.298: 0.0347% ( 4) 00:30:49.620 1370.298 - 1377.745: 0.0384% ( 2) 00:30:49.620 1377.745 - 1385.193: 0.0402% ( 1) 00:30:49.620 1385.193 - 1392.640: 0.0457% ( 3) 00:30:49.620 1392.640 - 1400.087: 0.0512% ( 3) 00:30:49.620 1400.087 - 1407.535: 0.0585% ( 4) 00:30:49.620 1407.535 - 1414.982: 0.0640% ( 3) 00:30:49.620 1422.429 - 1429.876: 0.0677% ( 2) 00:30:49.620 1437.324 - 1444.771: 0.0786% ( 6) 00:30:49.620 1444.771 - 1452.218: 0.0823% ( 2) 00:30:49.620 1452.218 - 1459.665: 0.0933% ( 6) 00:30:49.620 1459.665 - 1467.113: 0.0988% ( 3) 00:30:49.620 1467.113 - 1474.560: 0.1079% ( 5) 00:30:49.620 1474.560 - 1482.007: 0.1152% ( 4) 00:30:49.620 1482.007 - 1489.455: 0.1262% ( 6) 00:30:49.620 1489.455 - 1496.902: 0.1372% ( 6) 00:30:49.620 1496.902 - 1504.349: 0.1445% ( 4) 00:30:49.620 1504.349 - 1511.796: 0.1609% ( 9) 00:30:49.620 1511.796 - 1519.244: 0.1792% ( 10) 00:30:49.620 1519.244 - 1526.691: 0.1902% ( 6) 00:30:49.620 1526.691 - 1534.138: 0.2030% ( 7) 00:30:49.620 1534.138 - 1541.585: 0.2121% ( 5) 00:30:49.620 1541.585 - 1549.033: 0.2231% ( 
6) 00:30:49.620 1549.033 - 1556.480: 0.2359% ( 7) 00:30:49.620 1556.480 - 1563.927: 0.2506% ( 8) 00:30:49.620 1563.927 - 1571.375: 0.2688% ( 10) 00:30:49.620 1571.375 - 1578.822: 0.2908% ( 12) 00:30:49.620 1578.822 - 1586.269: 0.3146% ( 13) 00:30:49.620 1586.269 - 1593.716: 0.3438% ( 16) 00:30:49.620 1593.716 - 1601.164: 0.3749% ( 17) 00:30:49.620 1601.164 - 1608.611: 0.4097% ( 19) 00:30:49.620 1608.611 - 1616.058: 0.4517% ( 23) 00:30:49.620 1616.058 - 1623.505: 0.4865% ( 19) 00:30:49.620 1623.505 - 1630.953: 0.5395% ( 29) 00:30:49.620 1630.953 - 1638.400: 0.5743% ( 19) 00:30:49.620 1638.400 - 1645.847: 0.6291% ( 30) 00:30:49.620 1645.847 - 1653.295: 0.6785% ( 27) 00:30:49.620 1653.295 - 1660.742: 0.7279% ( 27) 00:30:49.620 1660.742 - 1668.189: 0.8029% ( 41) 00:30:49.620 1668.189 - 1675.636: 0.8833% ( 44) 00:30:49.620 1675.636 - 1683.084: 0.9528% ( 38) 00:30:49.620 1683.084 - 1690.531: 1.0443% ( 50) 00:30:49.620 1690.531 - 1697.978: 1.1357% ( 50) 00:30:49.620 1697.978 - 1705.425: 1.2528% ( 64) 00:30:49.620 1705.425 - 1712.873: 1.3643% ( 61) 00:30:49.620 1712.873 - 1720.320: 1.4887% ( 68) 00:30:49.620 1720.320 - 1727.767: 1.6222% ( 73) 00:30:49.620 1727.767 - 1735.215: 1.7850% ( 89) 00:30:49.620 1735.215 - 1742.662: 1.9313% ( 80) 00:30:49.620 1742.662 - 1750.109: 2.1068% ( 96) 00:30:49.620 1750.109 - 1757.556: 2.3556% ( 136) 00:30:49.620 1757.556 - 1765.004: 2.5055% ( 82) 00:30:49.620 1765.004 - 1772.451: 2.6738% ( 92) 00:30:49.620 1772.451 - 1779.898: 2.8622% ( 103) 00:30:49.620 1779.898 - 1787.345: 3.0688% ( 113) 00:30:49.620 1787.345 - 1794.793: 3.2974% ( 125) 00:30:49.620 1794.793 - 1802.240: 3.6138% ( 173) 00:30:49.620 1802.240 - 1809.687: 3.8625% ( 136) 00:30:49.620 1809.687 - 1817.135: 4.1570% ( 161) 00:30:49.620 1817.135 - 1824.582: 4.4478% ( 159) 00:30:49.620 1824.582 - 1832.029: 4.6837% ( 129) 00:30:49.620 1832.029 - 1839.476: 4.9836% ( 164) 00:30:49.620 1839.476 - 1846.924: 5.3384% ( 194) 00:30:49.620 1846.924 - 1854.371: 5.6567% ( 174) 00:30:49.620 1854.371 - 1861.818: 5.9365% ( 153) 00:30:49.620 1861.818 - 1869.265: 6.2327% ( 162) 00:30:49.620 1869.265 - 1876.713: 6.5619% ( 180) 00:30:49.620 1876.713 - 1884.160: 6.9789% ( 228) 00:30:49.620 1884.160 - 1891.607: 7.3630% ( 210) 00:30:49.620 1891.607 - 1899.055: 7.7086% ( 189) 00:30:49.620 1899.055 - 1906.502: 8.1677% ( 251) 00:30:49.620 1906.502 - 1921.396: 9.1296% ( 526) 00:30:49.620 1921.396 - 1936.291: 10.1611% ( 564) 00:30:49.620 1936.291 - 1951.185: 11.2456% ( 593) 00:30:49.620 1951.185 - 1966.080: 12.4179% ( 641) 00:30:49.620 1966.080 - 1980.975: 13.7987% ( 755) 00:30:49.620 1980.975 - 1995.869: 15.1740% ( 752) 00:30:49.620 1995.869 - 2010.764: 16.8310% ( 906) 00:30:49.620 2010.764 - 2025.658: 18.4074% ( 862) 00:30:49.620 2025.658 - 2040.553: 20.0552% ( 901) 00:30:49.620 2040.553 - 2055.447: 21.8274% ( 969) 00:30:49.620 2055.447 - 2070.342: 23.8044% ( 1081) 00:30:49.620 2070.342 - 2085.236: 25.6534% ( 1011) 00:30:49.620 2085.236 - 2100.131: 27.3487% ( 927) 00:30:49.620 2100.131 - 2115.025: 29.0258% ( 917) 00:30:49.620 2115.025 - 2129.920: 31.0777% ( 1122) 00:30:49.620 2129.920 - 2144.815: 33.0602% ( 1084) 00:30:49.620 2144.815 - 2159.709: 35.3664% ( 1261) 00:30:49.620 2159.709 - 2174.604: 37.2264% ( 1017) 00:30:49.620 2174.604 - 2189.498: 39.3094% ( 1139) 00:30:49.620 2189.498 - 2204.393: 41.2882% ( 1082) 00:30:49.620 2204.393 - 2219.287: 43.2835% ( 1091) 00:30:49.620 2219.287 - 2234.182: 45.2770% ( 1090) 00:30:49.620 2234.182 - 2249.076: 47.2028% ( 1053) 00:30:49.620 2249.076 - 2263.971: 48.9201% ( 939) 00:30:49.620 
2263.971 - 2278.865: 50.6282% ( 934) 00:30:49.620 2278.865 - 2293.760: 52.1864% ( 852) 00:30:49.620 2293.760 - 2308.655: 53.6513% ( 801) 00:30:49.620 2308.655 - 2323.549: 55.1839% ( 838) 00:30:49.620 2323.549 - 2338.444: 56.7549% ( 859) 00:30:49.620 2338.444 - 2353.338: 58.5545% ( 984) 00:30:49.620 2353.338 - 2368.233: 60.1511% ( 873) 00:30:49.620 2368.233 - 2383.127: 61.6398% ( 814) 00:30:49.620 2383.127 - 2398.022: 63.1120% ( 805) 00:30:49.620 2398.022 - 2412.916: 64.4598% ( 737) 00:30:49.620 2412.916 - 2427.811: 65.8699% ( 771) 00:30:49.620 2427.811 - 2442.705: 67.2086% ( 732) 00:30:49.620 2442.705 - 2457.600: 68.5108% ( 712) 00:30:49.620 2457.600 - 2472.495: 69.8678% ( 742) 00:30:49.620 2472.495 - 2487.389: 71.2248% ( 742) 00:30:49.620 2487.389 - 2502.284: 72.6202% ( 763) 00:30:49.620 2502.284 - 2517.178: 73.9571% ( 731) 00:30:49.620 2517.178 - 2532.073: 75.1806% ( 669) 00:30:49.620 2532.073 - 2546.967: 76.5029% ( 723) 00:30:49.620 2546.967 - 2561.862: 77.6825% ( 645) 00:30:49.620 2561.862 - 2576.756: 78.6390% ( 523) 00:30:49.620 2576.756 - 2591.651: 79.5991% ( 525) 00:30:49.620 2591.651 - 2606.545: 80.3947% ( 435) 00:30:49.620 2606.545 - 2621.440: 81.1756% ( 427) 00:30:49.620 2621.440 - 2636.335: 81.8797% ( 385) 00:30:49.620 2636.335 - 2651.229: 82.5747% ( 380) 00:30:49.620 2651.229 - 2666.124: 83.2696% ( 380) 00:30:49.620 2666.124 - 2681.018: 83.8658% ( 326) 00:30:49.620 2681.018 - 2695.913: 84.4822% ( 337) 00:30:49.620 2695.913 - 2710.807: 85.1314% ( 355) 00:30:49.620 2710.807 - 2725.702: 85.6380% ( 277) 00:30:49.620 2725.702 - 2740.596: 86.1263% ( 267) 00:30:49.620 2740.596 - 2755.491: 86.6292% ( 275) 00:30:49.620 2755.491 - 2770.385: 87.1212% ( 269) 00:30:49.620 2770.385 - 2785.280: 87.5437% ( 231) 00:30:49.620 2785.280 - 2800.175: 87.9661% ( 231) 00:30:49.620 2800.175 - 2815.069: 88.4160% ( 246) 00:30:49.620 2815.069 - 2829.964: 88.8293% ( 226) 00:30:49.620 2829.964 - 2844.858: 89.2427% ( 226) 00:30:49.620 2844.858 - 2859.753: 89.6231% ( 208) 00:30:49.620 2859.753 - 2874.647: 90.0218% ( 218) 00:30:49.620 2874.647 - 2889.542: 90.3912% ( 202) 00:30:49.620 2889.542 - 2904.436: 90.7753% ( 210) 00:30:49.620 2904.436 - 2919.331: 91.1502% ( 205) 00:30:49.620 2919.331 - 2934.225: 91.5050% ( 194) 00:30:49.620 2934.225 - 2949.120: 91.8506% ( 189) 00:30:49.620 2949.120 - 2964.015: 92.1835% ( 182) 00:30:49.620 2964.015 - 2978.909: 92.5364% ( 193) 00:30:49.620 2978.909 - 2993.804: 92.8565% ( 175) 00:30:49.620 2993.804 - 3008.698: 93.1930% ( 184) 00:30:49.620 3008.698 - 3023.593: 93.5112% ( 174) 00:30:49.620 3023.593 - 3038.487: 93.8148% ( 166) 00:30:49.620 3038.487 - 3053.382: 94.1294% ( 172) 00:30:49.620 3053.382 - 3068.276: 94.4128% ( 155) 00:30:49.620 3068.276 - 3083.171: 94.6927% ( 153) 00:30:49.620 3083.171 - 3098.065: 94.9816% ( 158) 00:30:49.620 3098.065 - 3112.960: 95.2505% ( 147) 00:30:49.620 3112.960 - 3127.855: 95.5083% ( 141) 00:30:49.620 3127.855 - 3142.749: 95.7753% ( 146) 00:30:49.620 3142.749 - 3157.644: 96.0387% ( 144) 00:30:49.620 3157.644 - 3172.538: 96.2746% ( 129) 00:30:49.620 3172.538 - 3187.433: 96.5343% ( 142) 00:30:49.620 3187.433 - 3202.327: 96.7574% ( 122) 00:30:49.620 3202.327 - 3217.222: 96.9842% ( 124) 00:30:49.620 3217.222 - 3232.116: 97.2055% ( 121) 00:30:49.621 3232.116 - 3247.011: 97.3994% ( 106) 00:30:49.621 3247.011 - 3261.905: 97.5859% ( 102) 00:30:49.621 3261.905 - 3276.800: 97.7597% ( 95) 00:30:49.621 3276.800 - 3291.695: 97.9206% ( 88) 00:30:49.621 3291.695 - 3306.589: 98.0742% ( 84) 00:30:49.621 3306.589 - 3321.484: 98.1986% ( 68) 00:30:49.621 3321.484 
- 3336.378: 98.3211% ( 67) 00:30:49.621 3336.378 - 3351.273: 98.4400% ( 65) 00:30:49.621 3351.273 - 3366.167: 98.5515% ( 61) 00:30:49.621 3366.167 - 3381.062: 98.6540% ( 56) 00:30:49.621 3381.062 - 3395.956: 98.7619% ( 59) 00:30:49.621 3395.956 - 3410.851: 98.8496% ( 48) 00:30:49.621 3410.851 - 3425.745: 98.9484% ( 54) 00:30:49.621 3425.745 - 3440.640: 99.0344% ( 47) 00:30:49.621 3440.640 - 3455.535: 99.1075% ( 40) 00:30:49.621 3455.535 - 3470.429: 99.1807% ( 40) 00:30:49.621 3470.429 - 3485.324: 99.2538% ( 40) 00:30:49.621 3485.324 - 3500.218: 99.3215% ( 37) 00:30:49.621 3500.218 - 3515.113: 99.3727% ( 28) 00:30:49.621 3515.113 - 3530.007: 99.4148% ( 23) 00:30:49.621 3530.007 - 3544.902: 99.4440% ( 16) 00:30:49.621 3544.902 - 3559.796: 99.4806% ( 20) 00:30:49.621 3559.796 - 3574.691: 99.5062% ( 14) 00:30:49.621 3574.691 - 3589.585: 99.5282% ( 12) 00:30:49.621 3589.585 - 3604.480: 99.5519% ( 13) 00:30:49.621 3604.480 - 3619.375: 99.5684% ( 9) 00:30:49.621 3619.375 - 3634.269: 99.5922% ( 13) 00:30:49.621 3634.269 - 3649.164: 99.6105% ( 10) 00:30:49.621 3649.164 - 3664.058: 99.6251% ( 8) 00:30:49.621 3664.058 - 3678.953: 99.6415% ( 9) 00:30:49.621 3678.953 - 3693.847: 99.6543% ( 7) 00:30:49.621 3693.847 - 3708.742: 99.6671% ( 7) 00:30:49.621 3708.742 - 3723.636: 99.6800% ( 7) 00:30:49.621 3723.636 - 3738.531: 99.6891% ( 5) 00:30:49.621 3738.531 - 3753.425: 99.6964% ( 4) 00:30:49.621 3753.425 - 3768.320: 99.7019% ( 3) 00:30:49.621 3768.320 - 3783.215: 99.7074% ( 3) 00:30:49.621 3783.215 - 3798.109: 99.7110% ( 2) 00:30:49.621 3798.109 - 3813.004: 99.7220% ( 6) 00:30:49.621 3813.004 - 3842.793: 99.7366% ( 8) 00:30:49.621 3842.793 - 3872.582: 99.7513% ( 8) 00:30:49.621 3872.582 - 3902.371: 99.7659% ( 8) 00:30:49.621 3902.371 - 3932.160: 99.7824% ( 9) 00:30:49.621 3932.160 - 3961.949: 99.7988% ( 9) 00:30:49.621 3961.949 - 3991.738: 99.8135% ( 8) 00:30:49.621 3991.738 - 4021.527: 99.8281% ( 8) 00:30:49.621 4021.527 - 4051.316: 99.8372% ( 5) 00:30:49.621 4051.316 - 4081.105: 99.8464% ( 5) 00:30:49.621 4081.105 - 4110.895: 99.8555% ( 5) 00:30:49.621 4110.895 - 4140.684: 99.8665% ( 6) 00:30:49.621 4140.684 - 4170.473: 99.8756% ( 5) 00:30:49.621 4170.473 - 4200.262: 99.8848% ( 5) 00:30:49.621 4200.262 - 4230.051: 99.8921% ( 4) 00:30:49.621 4230.051 - 4259.840: 99.8994% ( 4) 00:30:49.621 4259.840 - 4289.629: 99.9067% ( 4) 00:30:49.621 4289.629 - 4319.418: 99.9122% ( 3) 00:30:49.621 4319.418 - 4349.207: 99.9140% ( 1) 00:30:49.621 4349.207 - 4378.996: 99.9159% ( 1) 00:30:49.621 4378.996 - 4408.785: 99.9195% ( 2) 00:30:49.621 4408.785 - 4438.575: 99.9232% ( 2) 00:30:49.621 4438.575 - 4468.364: 99.9268% ( 2) 00:30:49.621 4468.364 - 4498.153: 99.9305% ( 2) 00:30:49.621 4498.153 - 4527.942: 99.9342% ( 2) 00:30:49.621 4557.731 - 4587.520: 99.9378% ( 2) 00:30:49.621 4587.520 - 4617.309: 99.9433% ( 3) 00:30:49.621 4617.309 - 4647.098: 99.9451% ( 1) 00:30:49.621 4647.098 - 4676.887: 99.9506% ( 3) 00:30:49.621 4676.887 - 4706.676: 99.9524% ( 1) 00:30:49.621 4706.676 - 4736.465: 99.9543% ( 1) 00:30:49.621 4736.465 - 4766.255: 99.9579% ( 2) 00:30:49.621 4766.255 - 4796.044: 99.9598% ( 1) 00:30:49.621 4796.044 - 4825.833: 99.9634% ( 2) 00:30:49.621 4825.833 - 4855.622: 99.9653% ( 1) 00:30:49.621 4855.622 - 4885.411: 99.9671% ( 1) 00:30:49.621 4885.411 - 4915.200: 99.9689% ( 1) 00:30:49.621 4944.989 - 4974.778: 99.9726% ( 2) 00:30:49.621 5004.567 - 5034.356: 99.9762% ( 2) 00:30:49.621 5034.356 - 5064.145: 99.9781% ( 1) 00:30:49.621 5064.145 - 5093.935: 99.9799% ( 1) 00:30:49.621 5093.935 - 5123.724: 99.9817% ( 1) 
00:30:49.621 5123.724 - 5153.513: 99.9854% ( 2) 00:30:49.621 5153.513 - 5183.302: 99.9872% ( 1) 00:30:49.621 5183.302 - 5213.091: 99.9890% ( 1) 00:30:49.621 5213.091 - 5242.880: 99.9927% ( 2) 00:30:49.621 5242.880 - 5272.669: 99.9945% ( 1) 00:30:49.621 5272.669 - 5302.458: 99.9982% ( 2) 00:30:49.621 5332.247 - 5362.036: 100.0000% ( 1) 00:30:49.621 00:30:49.621 05:13:19 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:30:49.621 00:30:49.621 real 0m2.636s 00:30:49.621 user 0m2.224s 00:30:49.621 sys 0m0.246s 00:30:49.621 05:13:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:49.621 05:13:19 -- common/autotest_common.sh@10 -- # set +x 00:30:49.621 ************************************ 00:30:49.621 END TEST nvme_perf 00:30:49.621 ************************************ 00:30:49.621 05:13:19 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:30:49.621 05:13:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:30:49.621 05:13:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:49.621 05:13:19 -- common/autotest_common.sh@10 -- # set +x 00:30:49.621 ************************************ 00:30:49.621 START TEST nvme_hello_world 00:30:49.621 ************************************ 00:30:49.621 05:13:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:30:49.621 Initializing NVMe Controllers 00:30:49.621 Attached to 0000:00:06.0 00:30:49.621 Namespace ID: 1 size: 5GB 00:30:49.621 Initialization complete. 00:30:49.621 INFO: using host memory buffer for IO 00:30:49.621 Hello world! 00:30:49.621 00:30:49.621 real 0m0.258s 00:30:49.621 user 0m0.073s 00:30:49.621 sys 0m0.113s 00:30:49.621 05:13:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:49.621 05:13:19 -- common/autotest_common.sh@10 -- # set +x 00:30:49.621 ************************************ 00:30:49.621 END TEST nvme_hello_world 00:30:49.621 ************************************ 00:30:49.621 05:13:19 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:49.621 05:13:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:49.621 05:13:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:49.621 05:13:19 -- common/autotest_common.sh@10 -- # set +x 00:30:49.621 ************************************ 00:30:49.621 START TEST nvme_sgl 00:30:49.621 ************************************ 00:30:49.621 05:13:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:49.880 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:30:49.880 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:30:49.880 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:30:49.880 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:30:49.880 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:30:49.880 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:30:50.139 NVMe Readv/Writev Request test 00:30:50.139 Attached to 0000:00:06.0 00:30:50.139 0000:00:06.0: build_io_request_2 test passed 00:30:50.139 0000:00:06.0: build_io_request_4 test passed 00:30:50.139 0000:00:06.0: build_io_request_5 test passed 00:30:50.139 0000:00:06.0: build_io_request_6 test passed 00:30:50.139 0000:00:06.0: build_io_request_7 test passed 00:30:50.139 0000:00:06.0: build_io_request_10 test passed 00:30:50.139 Cleaning up... 
00:30:50.139 00:30:50.139 real 0m0.323s 00:30:50.139 user 0m0.120s 00:30:50.139 sys 0m0.124s 00:30:50.139 05:13:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:50.139 05:13:19 -- common/autotest_common.sh@10 -- # set +x 00:30:50.139 ************************************ 00:30:50.139 END TEST nvme_sgl 00:30:50.139 ************************************ 00:30:50.139 05:13:19 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:50.139 05:13:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:50.139 05:13:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:50.139 05:13:19 -- common/autotest_common.sh@10 -- # set +x 00:30:50.139 ************************************ 00:30:50.139 START TEST nvme_e2edp 00:30:50.139 ************************************ 00:30:50.139 05:13:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:50.397 NVMe Write/Read with End-to-End data protection test 00:30:50.397 Attached to 0000:00:06.0 00:30:50.397 Cleaning up... 00:30:50.397 00:30:50.397 real 0m0.279s 00:30:50.397 user 0m0.106s 00:30:50.397 sys 0m0.109s 00:30:50.397 05:13:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:50.397 05:13:20 -- common/autotest_common.sh@10 -- # set +x 00:30:50.397 ************************************ 00:30:50.397 END TEST nvme_e2edp 00:30:50.397 ************************************ 00:30:50.397 05:13:20 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:30:50.397 05:13:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:50.397 05:13:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:50.397 05:13:20 -- common/autotest_common.sh@10 -- # set +x 00:30:50.397 ************************************ 00:30:50.397 START TEST nvme_reserve 00:30:50.397 ************************************ 00:30:50.397 05:13:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:30:50.656 ===================================================== 00:30:50.656 NVMe Controller at PCI bus 0, device 6, function 0 00:30:50.656 ===================================================== 00:30:50.656 Reservations: Not Supported 00:30:50.656 Reservation test passed 00:30:50.656 00:30:50.656 real 0m0.304s 00:30:50.656 user 0m0.086s 00:30:50.656 sys 0m0.130s 00:30:50.656 05:13:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:50.656 05:13:20 -- common/autotest_common.sh@10 -- # set +x 00:30:50.656 ************************************ 00:30:50.656 END TEST nvme_reserve 00:30:50.656 ************************************ 00:30:50.656 05:13:20 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:30:50.656 05:13:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:50.656 05:13:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:50.656 05:13:20 -- common/autotest_common.sh@10 -- # set +x 00:30:50.656 ************************************ 00:30:50.656 START TEST nvme_err_injection 00:30:50.656 ************************************ 00:30:50.656 05:13:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:30:51.241 NVMe Error Injection test 00:30:51.241 Attached to 0000:00:06.0 00:30:51.241 0000:00:06.0: get features failed as expected 00:30:51.241 0000:00:06.0: get features successfully as expected 00:30:51.241 0000:00:06.0: 
read failed as expected 00:30:51.241 0000:00:06.0: read successfully as expected 00:30:51.241 Cleaning up... 00:30:51.241 00:30:51.241 real 0m0.306s 00:30:51.241 user 0m0.096s 00:30:51.241 sys 0m0.133s 00:30:51.241 05:13:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:51.241 05:13:20 -- common/autotest_common.sh@10 -- # set +x 00:30:51.241 ************************************ 00:30:51.241 END TEST nvme_err_injection 00:30:51.241 ************************************ 00:30:51.241 05:13:20 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:30:51.241 05:13:20 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:30:51.241 05:13:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:51.241 05:13:20 -- common/autotest_common.sh@10 -- # set +x 00:30:51.241 ************************************ 00:30:51.241 START TEST nvme_overhead 00:30:51.241 ************************************ 00:30:51.241 05:13:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:30:52.631 Initializing NVMe Controllers 00:30:52.631 Attached to 0000:00:06.0 00:30:52.631 Initialization complete. Launching workers. 00:30:52.631 submit (in ns) avg, min, max = 14622.8, 11700.0, 60232.3 00:30:52.631 complete (in ns) avg, min, max = 11147.5, 7924.5, 482714.5 00:30:52.631 00:30:52.631 Submit histogram 00:30:52.631 ================ 00:30:52.631 Range in us Cumulative Count 00:30:52.631 11.695 - 11.753: 0.1885% ( 15) 00:30:52.631 11.753 - 11.811: 0.6786% ( 39) 00:30:52.631 11.811 - 11.869: 1.1939% ( 41) 00:30:52.631 11.869 - 11.927: 1.6086% ( 33) 00:30:52.631 11.927 - 11.985: 1.9228% ( 25) 00:30:52.631 11.985 - 12.044: 3.7577% ( 146) 00:30:52.631 12.044 - 12.102: 7.3143% ( 283) 00:30:52.631 12.102 - 12.160: 10.9589% ( 290) 00:30:52.631 12.160 - 12.218: 13.0451% ( 166) 00:30:52.631 12.218 - 12.276: 14.3396% ( 103) 00:30:52.631 12.276 - 12.335: 16.0488% ( 136) 00:30:52.631 12.335 - 12.393: 20.6736% ( 368) 00:30:52.631 12.393 - 12.451: 27.0831% ( 510) 00:30:52.631 12.451 - 12.509: 31.8462% ( 379) 00:30:52.631 12.509 - 12.567: 34.6613% ( 224) 00:30:52.631 12.567 - 12.625: 36.0060% ( 107) 00:30:52.631 12.625 - 12.684: 38.3185% ( 184) 00:30:52.631 12.684 - 12.742: 42.4658% ( 330) 00:30:52.631 12.742 - 12.800: 46.4874% ( 320) 00:30:52.631 12.800 - 12.858: 49.2899% ( 223) 00:30:52.631 12.858 - 12.916: 50.8232% ( 122) 00:30:52.631 12.916 - 12.975: 51.8411% ( 81) 00:30:52.631 12.975 - 13.033: 53.4121% ( 125) 00:30:52.631 13.033 - 13.091: 56.5163% ( 247) 00:30:52.631 13.091 - 13.149: 60.0101% ( 278) 00:30:52.631 13.149 - 13.207: 62.9383% ( 233) 00:30:52.631 13.207 - 13.265: 65.2507% ( 184) 00:30:52.631 13.265 - 13.324: 67.2615% ( 160) 00:30:52.631 13.324 - 13.382: 68.4429% ( 94) 00:30:52.631 13.382 - 13.440: 69.4609% ( 81) 00:30:52.631 13.440 - 13.498: 70.5668% ( 88) 00:30:52.631 13.498 - 13.556: 71.7607% ( 95) 00:30:52.631 13.556 - 13.615: 72.7787% ( 81) 00:30:52.631 13.615 - 13.673: 74.1360% ( 108) 00:30:52.631 13.673 - 13.731: 75.0911% ( 76) 00:30:52.631 13.731 - 13.789: 75.7446% ( 52) 00:30:52.631 13.789 - 13.847: 76.1217% ( 30) 00:30:52.631 13.847 - 13.905: 76.5238% ( 32) 00:30:52.631 13.905 - 13.964: 77.4161% ( 71) 00:30:52.631 13.964 - 14.022: 78.8362% ( 113) 00:30:52.631 14.022 - 14.080: 80.6711% ( 146) 00:30:52.631 14.080 - 14.138: 81.9279% ( 100) 00:30:52.631 14.138 - 14.196: 82.7071% ( 62) 00:30:52.631 14.196 - 14.255: 83.1972% ( 39) 00:30:52.631 14.255 - 14.313: 
83.7250% ( 42) 00:30:52.631 14.313 - 14.371: 84.2152% ( 39) 00:30:52.631 14.371 - 14.429: 84.7179% ( 40) 00:30:52.631 14.429 - 14.487: 85.0697% ( 28) 00:30:52.631 14.487 - 14.545: 85.4970% ( 34) 00:30:52.631 14.545 - 14.604: 85.8112% ( 25) 00:30:52.631 14.604 - 14.662: 85.9243% ( 9) 00:30:52.631 14.662 - 14.720: 86.0877% ( 13) 00:30:52.631 14.720 - 14.778: 86.1757% ( 7) 00:30:52.631 14.778 - 14.836: 86.3265% ( 12) 00:30:52.631 14.836 - 14.895: 86.4019% ( 6) 00:30:52.631 14.895 - 15.011: 86.5527% ( 12) 00:30:52.631 15.011 - 15.127: 86.6784% ( 10) 00:30:52.631 15.127 - 15.244: 86.7664% ( 7) 00:30:52.631 15.244 - 15.360: 86.8795% ( 9) 00:30:52.631 15.360 - 15.476: 86.9423% ( 5) 00:30:52.631 15.476 - 15.593: 86.9926% ( 4) 00:30:52.631 15.593 - 15.709: 87.1057% ( 9) 00:30:52.631 15.709 - 15.825: 87.1937% ( 7) 00:30:52.631 15.825 - 15.942: 87.2439% ( 4) 00:30:52.631 15.942 - 16.058: 87.2816% ( 3) 00:30:52.631 16.058 - 16.175: 87.3822% ( 8) 00:30:52.631 16.175 - 16.291: 87.4324% ( 4) 00:30:52.631 16.291 - 16.407: 87.4702% ( 3) 00:30:52.631 16.407 - 16.524: 87.5456% ( 6) 00:30:52.631 16.524 - 16.640: 87.5833% ( 3) 00:30:52.631 16.640 - 16.756: 87.6335% ( 4) 00:30:52.631 16.756 - 16.873: 87.6712% ( 3) 00:30:52.631 16.873 - 16.989: 87.7718% ( 8) 00:30:52.631 16.989 - 17.105: 87.8095% ( 3) 00:30:52.631 17.105 - 17.222: 87.8346% ( 2) 00:30:52.631 17.222 - 17.338: 87.8974% ( 5) 00:30:52.631 17.338 - 17.455: 87.9477% ( 4) 00:30:52.631 17.455 - 17.571: 87.9854% ( 3) 00:30:52.631 17.571 - 17.687: 88.0734% ( 7) 00:30:52.631 17.687 - 17.804: 88.0985% ( 2) 00:30:52.631 17.804 - 17.920: 88.1739% ( 6) 00:30:52.631 17.920 - 18.036: 88.2242% ( 4) 00:30:52.631 18.153 - 18.269: 88.2745% ( 4) 00:30:52.631 18.269 - 18.385: 88.2996% ( 2) 00:30:52.631 18.385 - 18.502: 88.3499% ( 4) 00:30:52.632 18.502 - 18.618: 88.3876% ( 3) 00:30:52.632 18.618 - 18.735: 88.4630% ( 6) 00:30:52.632 18.735 - 18.851: 88.5761% ( 9) 00:30:52.632 18.851 - 18.967: 88.6264% ( 4) 00:30:52.632 18.967 - 19.084: 88.6892% ( 5) 00:30:52.632 19.084 - 19.200: 88.7772% ( 7) 00:30:52.632 19.200 - 19.316: 88.8149% ( 3) 00:30:52.632 19.316 - 19.433: 88.9531% ( 11) 00:30:52.632 19.433 - 19.549: 89.0285% ( 6) 00:30:52.632 19.549 - 19.665: 89.0662% ( 3) 00:30:52.632 19.665 - 19.782: 89.1291% ( 5) 00:30:52.632 19.782 - 19.898: 89.1542% ( 2) 00:30:52.632 19.898 - 20.015: 89.2547% ( 8) 00:30:52.632 20.015 - 20.131: 89.3301% ( 6) 00:30:52.632 20.131 - 20.247: 89.4181% ( 7) 00:30:52.632 20.247 - 20.364: 89.4433% ( 2) 00:30:52.632 20.364 - 20.480: 89.4810% ( 3) 00:30:52.632 20.480 - 20.596: 89.4935% ( 1) 00:30:52.632 20.596 - 20.713: 89.5187% ( 2) 00:30:52.632 20.713 - 20.829: 89.6318% ( 9) 00:30:52.632 20.829 - 20.945: 89.7323% ( 8) 00:30:52.632 20.945 - 21.062: 89.8077% ( 6) 00:30:52.632 21.062 - 21.178: 89.8454% ( 3) 00:30:52.632 21.178 - 21.295: 89.8957% ( 4) 00:30:52.632 21.295 - 21.411: 89.9837% ( 7) 00:30:52.632 21.411 - 21.527: 90.0465% ( 5) 00:30:52.632 21.527 - 21.644: 90.0842% ( 3) 00:30:52.632 21.644 - 21.760: 90.1219% ( 3) 00:30:52.632 21.876 - 21.993: 90.1470% ( 2) 00:30:52.632 21.993 - 22.109: 90.1973% ( 4) 00:30:52.632 22.109 - 22.225: 90.2476% ( 4) 00:30:52.632 22.225 - 22.342: 90.2979% ( 4) 00:30:52.632 22.342 - 22.458: 90.3481% ( 4) 00:30:52.632 22.458 - 22.575: 90.3858% ( 3) 00:30:52.632 22.575 - 22.691: 90.4235% ( 3) 00:30:52.632 22.691 - 22.807: 90.4989% ( 6) 00:30:52.632 22.924 - 23.040: 90.5241% ( 2) 00:30:52.632 23.040 - 23.156: 90.5366% ( 1) 00:30:52.632 23.156 - 23.273: 90.5743% ( 3) 00:30:52.632 23.273 - 23.389: 90.5869% ( 1) 
00:30:52.632 23.389 - 23.505: 90.5995% ( 1) 00:30:52.632 23.505 - 23.622: 90.6120% ( 1) 00:30:52.632 23.622 - 23.738: 90.6246% ( 1) 00:30:52.632 23.738 - 23.855: 90.6623% ( 3) 00:30:52.632 23.855 - 23.971: 90.7000% ( 3) 00:30:52.632 23.971 - 24.087: 90.7126% ( 1) 00:30:52.632 24.204 - 24.320: 90.7251% ( 1) 00:30:52.632 24.320 - 24.436: 90.7377% ( 1) 00:30:52.632 24.553 - 24.669: 90.7629% ( 2) 00:30:52.632 24.669 - 24.785: 90.7754% ( 1) 00:30:52.632 25.949 - 26.065: 90.7880% ( 1) 00:30:52.632 26.298 - 26.415: 90.8006% ( 1) 00:30:52.632 26.415 - 26.531: 90.8257% ( 2) 00:30:52.632 26.647 - 26.764: 90.8885% ( 5) 00:30:52.632 26.764 - 26.880: 91.0770% ( 15) 00:30:52.632 26.880 - 26.996: 91.2907% ( 17) 00:30:52.632 26.996 - 27.113: 91.4666% ( 14) 00:30:52.632 27.113 - 27.229: 91.8562% ( 31) 00:30:52.632 27.229 - 27.345: 92.4972% ( 51) 00:30:52.632 27.345 - 27.462: 93.0250% ( 42) 00:30:52.632 27.462 - 27.578: 93.7916% ( 61) 00:30:52.632 27.578 - 27.695: 94.5960% ( 64) 00:30:52.632 27.695 - 27.811: 95.2997% ( 56) 00:30:52.632 27.811 - 27.927: 95.8527% ( 44) 00:30:52.632 27.927 - 28.044: 96.2423% ( 31) 00:30:52.632 28.044 - 28.160: 96.6445% ( 32) 00:30:52.632 28.160 - 28.276: 96.9210% ( 22) 00:30:52.632 28.276 - 28.393: 97.1472% ( 18) 00:30:52.632 28.393 - 28.509: 97.4111% ( 21) 00:30:52.632 28.509 - 28.625: 97.6624% ( 20) 00:30:52.632 28.625 - 28.742: 97.8887% ( 18) 00:30:52.632 28.742 - 28.858: 98.0772% ( 15) 00:30:52.632 28.858 - 28.975: 98.2280% ( 12) 00:30:52.632 28.975 - 29.091: 98.4416% ( 17) 00:30:52.632 29.091 - 29.207: 98.6050% ( 13) 00:30:52.632 29.207 - 29.324: 98.6804% ( 6) 00:30:52.632 29.324 - 29.440: 98.7558% ( 6) 00:30:52.632 29.440 - 29.556: 98.8438% ( 7) 00:30:52.632 29.556 - 29.673: 98.8941% ( 4) 00:30:52.632 29.673 - 29.789: 98.9192% ( 2) 00:30:52.632 29.789 - 30.022: 99.0323% ( 9) 00:30:52.632 30.022 - 30.255: 99.0951% ( 5) 00:30:52.632 30.255 - 30.487: 99.1454% ( 4) 00:30:52.632 30.487 - 30.720: 99.2082% ( 5) 00:30:52.632 30.720 - 30.953: 99.2208% ( 1) 00:30:52.632 30.953 - 31.185: 99.2334% ( 1) 00:30:52.632 31.418 - 31.651: 99.2459% ( 1) 00:30:52.632 31.651 - 31.884: 99.2585% ( 1) 00:30:52.632 31.884 - 32.116: 99.2711% ( 1) 00:30:52.632 32.349 - 32.582: 99.3088% ( 3) 00:30:52.632 32.582 - 32.815: 99.3465% ( 3) 00:30:52.632 33.047 - 33.280: 99.3716% ( 2) 00:30:52.632 33.280 - 33.513: 99.3842% ( 1) 00:30:52.632 33.513 - 33.745: 99.3968% ( 1) 00:30:52.632 33.745 - 33.978: 99.4093% ( 1) 00:30:52.632 33.978 - 34.211: 99.4345% ( 2) 00:30:52.632 34.211 - 34.444: 99.4973% ( 5) 00:30:52.632 34.444 - 34.676: 99.5099% ( 1) 00:30:52.632 34.676 - 34.909: 99.5224% ( 1) 00:30:52.632 35.142 - 35.375: 99.5476% ( 2) 00:30:52.632 35.375 - 35.607: 99.5727% ( 2) 00:30:52.632 35.607 - 35.840: 99.5853% ( 1) 00:30:52.632 36.073 - 36.305: 99.5978% ( 1) 00:30:52.632 36.305 - 36.538: 99.6104% ( 1) 00:30:52.632 36.538 - 36.771: 99.6355% ( 2) 00:30:52.632 36.771 - 37.004: 99.6481% ( 1) 00:30:52.632 37.236 - 37.469: 99.6607% ( 1) 00:30:52.632 37.935 - 38.167: 99.6858% ( 2) 00:30:52.632 39.098 - 39.331: 99.6984% ( 1) 00:30:52.632 39.331 - 39.564: 99.7235% ( 2) 00:30:52.632 39.564 - 39.796: 99.7361% ( 1) 00:30:52.632 40.727 - 40.960: 99.7486% ( 1) 00:30:52.632 40.960 - 41.193: 99.7612% ( 1) 00:30:52.632 41.891 - 42.124: 99.7738% ( 1) 00:30:52.632 42.589 - 42.822: 99.7864% ( 1) 00:30:52.632 42.822 - 43.055: 99.7989% ( 1) 00:30:52.632 43.055 - 43.287: 99.8115% ( 1) 00:30:52.632 43.287 - 43.520: 99.8241% ( 1) 00:30:52.632 43.520 - 43.753: 99.8366% ( 1) 00:30:52.632 43.753 - 43.985: 99.8492% ( 1) 00:30:52.632 
44.451 - 44.684: 99.8743% ( 2) 00:30:52.632 44.684 - 44.916: 99.8869% ( 1) 00:30:52.632 45.149 - 45.382: 99.8995% ( 1) 00:30:52.632 45.382 - 45.615: 99.9120% ( 1) 00:30:52.632 45.615 - 45.847: 99.9246% ( 1) 00:30:52.632 48.407 - 48.640: 99.9372% ( 1) 00:30:52.632 51.200 - 51.433: 99.9497% ( 1) 00:30:52.632 51.433 - 51.665: 99.9623% ( 1) 00:30:52.632 52.596 - 52.829: 99.9749% ( 1) 00:30:52.632 53.295 - 53.527: 99.9874% ( 1) 00:30:52.632 60.044 - 60.509: 100.0000% ( 1) 00:30:52.632 00:30:52.632 Complete histogram 00:30:52.632 ================== 00:30:52.632 Range in us Cumulative Count 00:30:52.632 7.913 - 7.971: 0.0754% ( 6) 00:30:52.632 7.971 - 8.029: 0.1257% ( 4) 00:30:52.632 8.029 - 8.087: 0.2262% ( 8) 00:30:52.632 8.087 - 8.145: 0.5530% ( 26) 00:30:52.632 8.145 - 8.204: 1.1436% ( 47) 00:30:52.632 8.204 - 8.262: 2.0108% ( 69) 00:30:52.632 8.262 - 8.320: 2.7774% ( 61) 00:30:52.632 8.320 - 8.378: 5.4543% ( 213) 00:30:52.632 8.378 - 8.436: 8.8853% ( 273) 00:30:52.632 8.436 - 8.495: 11.2605% ( 189) 00:30:52.632 8.495 - 8.553: 12.9948% ( 138) 00:30:52.632 8.553 - 8.611: 16.3127% ( 264) 00:30:52.632 8.611 - 8.669: 22.1189% ( 462) 00:30:52.632 8.669 - 8.727: 26.6558% ( 361) 00:30:52.632 8.727 - 8.785: 31.1424% ( 357) 00:30:52.632 8.785 - 8.844: 34.2843% ( 250) 00:30:52.632 8.844 - 8.902: 40.8822% ( 525) 00:30:52.632 8.902 - 8.960: 47.1157% ( 496) 00:30:52.632 8.960 - 9.018: 50.6221% ( 279) 00:30:52.632 9.018 - 9.076: 53.2110% ( 206) 00:30:52.632 9.076 - 9.135: 56.3403% ( 249) 00:30:52.632 9.135 - 9.193: 63.5038% ( 570) 00:30:52.632 9.193 - 9.251: 69.3100% ( 462) 00:30:52.632 9.251 - 9.309: 72.9044% ( 286) 00:30:52.632 9.309 - 9.367: 74.8649% ( 156) 00:30:52.632 9.367 - 9.425: 76.1217% ( 100) 00:30:52.632 9.425 - 9.484: 78.0068% ( 150) 00:30:52.632 9.484 - 9.542: 79.2635% ( 100) 00:30:52.632 9.542 - 9.600: 80.1181% ( 68) 00:30:52.632 9.600 - 9.658: 80.6837% ( 45) 00:30:52.632 9.658 - 9.716: 81.0104% ( 26) 00:30:52.632 9.716 - 9.775: 81.3121% ( 24) 00:30:52.632 9.775 - 9.833: 81.5885% ( 22) 00:30:52.632 9.833 - 9.891: 81.7896% ( 16) 00:30:52.632 9.891 - 9.949: 82.0410% ( 20) 00:30:52.632 9.949 - 10.007: 82.3677% ( 26) 00:30:52.632 10.007 - 10.065: 82.5437% ( 14) 00:30:52.632 10.065 - 10.124: 82.9458% ( 32) 00:30:52.632 10.124 - 10.182: 83.2475% ( 24) 00:30:52.632 10.182 - 10.240: 83.4611% ( 17) 00:30:52.632 10.240 - 10.298: 83.7627% ( 24) 00:30:52.632 10.298 - 10.356: 83.9135% ( 12) 00:30:52.632 10.356 - 10.415: 84.0392% ( 10) 00:30:52.632 10.415 - 10.473: 84.1398% ( 8) 00:30:52.632 10.473 - 10.531: 84.2403% ( 8) 00:30:52.632 10.531 - 10.589: 84.3157% ( 6) 00:30:52.632 10.589 - 10.647: 84.4414% ( 10) 00:30:52.632 10.647 - 10.705: 84.4791% ( 3) 00:30:52.632 10.705 - 10.764: 84.5168% ( 3) 00:30:52.632 10.764 - 10.822: 84.5545% ( 3) 00:30:52.632 10.822 - 10.880: 84.7053% ( 12) 00:30:52.632 10.880 - 10.938: 84.7933% ( 7) 00:30:52.632 10.938 - 10.996: 84.8812% ( 7) 00:30:52.632 10.996 - 11.055: 85.0069% ( 10) 00:30:52.632 11.055 - 11.113: 85.0697% ( 5) 00:30:52.632 11.113 - 11.171: 85.1577% ( 7) 00:30:52.633 11.171 - 11.229: 85.2457% ( 7) 00:30:52.633 11.229 - 11.287: 85.3085% ( 5) 00:30:52.633 11.287 - 11.345: 85.3462% ( 3) 00:30:52.633 11.345 - 11.404: 85.3588% ( 1) 00:30:52.633 11.462 - 11.520: 85.4091% ( 4) 00:30:52.633 11.520 - 11.578: 85.4845% ( 6) 00:30:52.633 11.578 - 11.636: 85.5096% ( 2) 00:30:52.633 11.636 - 11.695: 85.5976% ( 7) 00:30:52.633 11.695 - 11.753: 85.6353% ( 3) 00:30:52.633 11.753 - 11.811: 85.6479% ( 1) 00:30:52.633 11.811 - 11.869: 85.6730% ( 2) 00:30:52.633 11.869 - 11.927: 
85.7233% ( 4) 00:30:52.633 11.927 - 11.985: 85.7484% ( 2) 00:30:52.633 11.985 - 12.044: 85.7610% ( 1) 00:30:52.633 12.044 - 12.102: 85.7735% ( 1) 00:30:52.633 12.102 - 12.160: 85.7861% ( 1) 00:30:52.633 12.218 - 12.276: 85.7987% ( 1) 00:30:52.633 12.276 - 12.335: 85.8112% ( 1) 00:30:52.633 12.335 - 12.393: 85.8364% ( 2) 00:30:52.633 12.393 - 12.451: 85.8866% ( 4) 00:30:52.633 12.509 - 12.567: 85.9118% ( 2) 00:30:52.633 12.800 - 12.858: 85.9369% ( 2) 00:30:52.633 12.858 - 12.916: 85.9746% ( 3) 00:30:52.633 13.091 - 13.149: 85.9872% ( 1) 00:30:52.633 13.149 - 13.207: 85.9997% ( 1) 00:30:52.633 13.207 - 13.265: 86.0249% ( 2) 00:30:52.633 13.265 - 13.324: 86.0626% ( 3) 00:30:52.633 13.382 - 13.440: 86.0752% ( 1) 00:30:52.633 13.440 - 13.498: 86.1129% ( 3) 00:30:52.633 13.498 - 13.556: 86.1380% ( 2) 00:30:52.633 13.556 - 13.615: 86.1506% ( 1) 00:30:52.633 13.615 - 13.673: 86.1631% ( 1) 00:30:52.633 13.731 - 13.789: 86.1757% ( 1) 00:30:52.633 13.789 - 13.847: 86.2008% ( 2) 00:30:52.633 13.847 - 13.905: 86.2385% ( 3) 00:30:52.633 13.905 - 13.964: 86.2511% ( 1) 00:30:52.633 13.964 - 14.022: 86.2637% ( 1) 00:30:52.633 14.138 - 14.196: 86.2888% ( 2) 00:30:52.633 14.196 - 14.255: 86.3139% ( 2) 00:30:52.633 14.255 - 14.313: 86.3391% ( 2) 00:30:52.633 14.313 - 14.371: 86.3642% ( 2) 00:30:52.633 14.371 - 14.429: 86.3768% ( 1) 00:30:52.633 14.487 - 14.545: 86.3893% ( 1) 00:30:52.633 14.545 - 14.604: 86.4145% ( 2) 00:30:52.633 14.604 - 14.662: 86.4270% ( 1) 00:30:52.633 14.662 - 14.720: 86.4522% ( 2) 00:30:52.633 14.720 - 14.778: 86.4773% ( 2) 00:30:52.633 14.778 - 14.836: 86.4899% ( 1) 00:30:52.633 14.836 - 14.895: 86.5276% ( 3) 00:30:52.633 14.895 - 15.011: 86.5904% ( 5) 00:30:52.633 15.011 - 15.127: 86.6910% ( 8) 00:30:52.633 15.127 - 15.244: 86.7538% ( 5) 00:30:52.633 15.244 - 15.360: 86.8292% ( 6) 00:30:52.633 15.360 - 15.476: 86.8920% ( 5) 00:30:52.633 15.476 - 15.593: 86.9297% ( 3) 00:30:52.633 15.593 - 15.709: 86.9549% ( 2) 00:30:52.633 15.709 - 15.825: 86.9926% ( 3) 00:30:52.633 15.825 - 15.942: 87.0177% ( 2) 00:30:52.633 15.942 - 16.058: 87.0680% ( 4) 00:30:52.633 16.058 - 16.175: 87.1685% ( 8) 00:30:52.633 16.175 - 16.291: 87.1937% ( 2) 00:30:52.633 16.291 - 16.407: 87.2188% ( 2) 00:30:52.633 16.407 - 16.524: 87.2565% ( 3) 00:30:52.633 16.524 - 16.640: 87.3193% ( 5) 00:30:52.633 16.640 - 16.756: 87.3570% ( 3) 00:30:52.633 16.756 - 16.873: 87.4073% ( 4) 00:30:52.633 16.989 - 17.105: 87.4702% ( 5) 00:30:52.633 17.105 - 17.222: 87.4953% ( 2) 00:30:52.633 17.222 - 17.338: 87.5330% ( 3) 00:30:52.633 17.338 - 17.455: 87.5581% ( 2) 00:30:52.633 17.455 - 17.571: 87.5958% ( 3) 00:30:52.633 17.571 - 17.687: 87.6335% ( 3) 00:30:52.633 17.687 - 17.804: 87.6712% ( 3) 00:30:52.633 17.804 - 17.920: 87.6964% ( 2) 00:30:52.633 17.920 - 18.036: 87.7341% ( 3) 00:30:52.633 18.036 - 18.153: 87.7592% ( 2) 00:30:52.633 18.153 - 18.269: 87.7969% ( 3) 00:30:52.633 18.269 - 18.385: 87.8346% ( 3) 00:30:52.633 18.385 - 18.502: 87.8723% ( 3) 00:30:52.633 18.502 - 18.618: 87.9100% ( 3) 00:30:52.633 18.618 - 18.735: 87.9352% ( 2) 00:30:52.633 18.735 - 18.851: 87.9980% ( 5) 00:30:52.633 18.851 - 18.967: 88.0608% ( 5) 00:30:52.633 18.967 - 19.084: 88.0860% ( 2) 00:30:52.633 19.084 - 19.200: 88.1362% ( 4) 00:30:52.633 19.200 - 19.316: 88.1739% ( 3) 00:30:52.633 19.433 - 19.549: 88.2242% ( 4) 00:30:52.633 19.549 - 19.665: 88.2368% ( 1) 00:30:52.633 19.665 - 19.782: 88.2619% ( 2) 00:30:52.633 19.782 - 19.898: 88.3122% ( 4) 00:30:52.633 19.898 - 20.015: 88.3499% ( 3) 00:30:52.633 20.015 - 20.131: 88.4127% ( 5) 00:30:52.633 20.247 
- 20.364: 88.4379% ( 2) 00:30:52.633 20.364 - 20.480: 88.4630% ( 2) 00:30:52.633 20.480 - 20.596: 88.5007% ( 3) 00:30:52.633 20.596 - 20.713: 88.5133% ( 1) 00:30:52.633 20.713 - 20.829: 88.5258% ( 1) 00:30:52.633 21.178 - 21.295: 88.5384% ( 1) 00:30:52.633 21.527 - 21.644: 88.5510% ( 1) 00:30:52.633 21.993 - 22.109: 88.5635% ( 1) 00:30:52.633 22.109 - 22.225: 88.5761% ( 1) 00:30:52.633 22.342 - 22.458: 88.5887% ( 1) 00:30:52.633 22.575 - 22.691: 88.6012% ( 1) 00:30:52.633 22.807 - 22.924: 88.6641% ( 5) 00:30:52.633 22.924 - 23.040: 88.7772% ( 9) 00:30:52.633 23.040 - 23.156: 89.0285% ( 20) 00:30:52.633 23.156 - 23.273: 89.3427% ( 25) 00:30:52.633 23.273 - 23.389: 89.7951% ( 36) 00:30:52.633 23.389 - 23.505: 90.9514% ( 92) 00:30:52.633 23.505 - 23.622: 92.1201% ( 93) 00:30:52.633 23.622 - 23.738: 93.2387% ( 89) 00:30:52.633 23.738 - 23.855: 94.2566% ( 81) 00:30:52.633 23.855 - 23.971: 95.2118% ( 76) 00:30:52.633 23.971 - 24.087: 95.9910% ( 62) 00:30:52.633 24.087 - 24.204: 96.6947% ( 56) 00:30:52.633 24.204 - 24.320: 97.1095% ( 33) 00:30:52.633 24.320 - 24.436: 97.3608% ( 20) 00:30:52.633 24.436 - 24.553: 97.6373% ( 22) 00:30:52.633 24.553 - 24.669: 97.7881% ( 12) 00:30:52.633 24.669 - 24.785: 97.8887% ( 8) 00:30:52.633 24.785 - 24.902: 97.9892% ( 8) 00:30:52.633 24.902 - 25.018: 98.1777% ( 15) 00:30:52.633 25.018 - 25.135: 98.3034% ( 10) 00:30:52.633 25.135 - 25.251: 98.4039% ( 8) 00:30:52.633 25.251 - 25.367: 98.4793% ( 6) 00:30:52.633 25.367 - 25.484: 98.5296% ( 4) 00:30:52.633 25.484 - 25.600: 98.6553% ( 10) 00:30:52.633 25.600 - 25.716: 98.7307% ( 6) 00:30:52.633 25.716 - 25.833: 98.7684% ( 3) 00:30:52.633 25.833 - 25.949: 98.8061% ( 3) 00:30:52.633 25.949 - 26.065: 98.8689% ( 5) 00:30:52.633 26.065 - 26.182: 98.9318% ( 5) 00:30:52.633 26.182 - 26.298: 98.9820% ( 4) 00:30:52.633 26.298 - 26.415: 99.0072% ( 2) 00:30:52.633 26.415 - 26.531: 99.0197% ( 1) 00:30:52.633 26.531 - 26.647: 99.0323% ( 1) 00:30:52.633 26.647 - 26.764: 99.0574% ( 2) 00:30:52.633 26.764 - 26.880: 99.0700% ( 1) 00:30:52.633 27.345 - 27.462: 99.0826% ( 1) 00:30:52.633 28.393 - 28.509: 99.1077% ( 2) 00:30:52.633 28.742 - 28.858: 99.1203% ( 1) 00:30:52.633 28.858 - 28.975: 99.1454% ( 2) 00:30:52.633 28.975 - 29.091: 99.1580% ( 1) 00:30:52.633 29.091 - 29.207: 99.1705% ( 1) 00:30:52.633 29.440 - 29.556: 99.1957% ( 2) 00:30:52.633 29.556 - 29.673: 99.2459% ( 4) 00:30:52.633 30.022 - 30.255: 99.2711% ( 2) 00:30:52.633 30.255 - 30.487: 99.2836% ( 1) 00:30:52.633 30.720 - 30.953: 99.3339% ( 4) 00:30:52.633 30.953 - 31.185: 99.3465% ( 1) 00:30:52.633 31.185 - 31.418: 99.3716% ( 2) 00:30:52.633 31.418 - 31.651: 99.4345% ( 5) 00:30:52.633 31.651 - 31.884: 99.4470% ( 1) 00:30:52.633 31.884 - 32.116: 99.4596% ( 1) 00:30:52.633 32.582 - 32.815: 99.4847% ( 2) 00:30:52.633 33.280 - 33.513: 99.5224% ( 3) 00:30:52.633 33.745 - 33.978: 99.5350% ( 1) 00:30:52.633 33.978 - 34.211: 99.5727% ( 3) 00:30:52.633 34.211 - 34.444: 99.5853% ( 1) 00:30:52.633 34.676 - 34.909: 99.5978% ( 1) 00:30:52.633 34.909 - 35.142: 99.6104% ( 1) 00:30:52.633 35.142 - 35.375: 99.6230% ( 1) 00:30:52.633 35.840 - 36.073: 99.6355% ( 1) 00:30:52.633 36.771 - 37.004: 99.6481% ( 1) 00:30:52.633 37.004 - 37.236: 99.6607% ( 1) 00:30:52.633 38.865 - 39.098: 99.6732% ( 1) 00:30:52.633 39.564 - 39.796: 99.6858% ( 1) 00:30:52.633 40.029 - 40.262: 99.6984% ( 1) 00:30:52.633 41.891 - 42.124: 99.7109% ( 1) 00:30:52.633 42.124 - 42.356: 99.7235% ( 1) 00:30:52.633 42.356 - 42.589: 99.7486% ( 2) 00:30:52.633 43.985 - 44.218: 99.7612% ( 1) 00:30:52.633 46.313 - 46.545: 
99.7864% ( 2) 00:30:52.633 46.545 - 46.778: 99.8115% ( 2) 00:30:52.633 46.778 - 47.011: 99.8241% ( 1) 00:30:52.633 47.476 - 47.709: 99.8366% ( 1) 00:30:52.633 47.942 - 48.175: 99.8492% ( 1) 00:30:52.633 52.596 - 52.829: 99.8618% ( 1) 00:30:52.633 56.087 - 56.320: 99.8743% ( 1) 00:30:52.633 57.484 - 57.716: 99.8869% ( 1) 00:30:52.633 65.164 - 65.629: 99.8995% ( 1) 00:30:52.633 72.145 - 72.611: 99.9120% ( 1) 00:30:52.633 80.058 - 80.524: 99.9246% ( 1) 00:30:52.633 87.971 - 88.436: 99.9372% ( 1) 00:30:52.633 90.764 - 91.229: 99.9497% ( 1) 00:30:52.633 104.727 - 105.193: 99.9623% ( 1) 00:30:52.633 107.985 - 108.451: 99.9749% ( 1) 00:30:52.633 305.338 - 307.200: 99.9874% ( 1) 00:30:52.633 480.349 - 484.073: 100.0000% ( 1) 00:30:52.633 00:30:52.633 00:30:52.633 real 0m1.315s 00:30:52.633 user 0m1.126s 00:30:52.634 sys 0m0.109s 00:30:52.634 05:13:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:52.634 05:13:22 -- common/autotest_common.sh@10 -- # set +x 00:30:52.634 ************************************ 00:30:52.634 END TEST nvme_overhead 00:30:52.634 ************************************ 00:30:52.634 05:13:22 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:52.634 05:13:22 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:30:52.634 05:13:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:52.634 05:13:22 -- common/autotest_common.sh@10 -- # set +x 00:30:52.634 ************************************ 00:30:52.634 START TEST nvme_arbitration 00:30:52.634 ************************************ 00:30:52.634 05:13:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:55.914 Initializing NVMe Controllers 00:30:55.914 Attached to 0000:00:06.0 00:30:55.914 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:30:55.914 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:30:55.914 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:30:55.914 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:30:55.914 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:30:55.914 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:30:55.914 Initialization complete. Launching workers. 
00:30:55.914 Starting thread on core 1 with urgent priority queue 00:30:55.914 Starting thread on core 2 with urgent priority queue 00:30:55.914 Starting thread on core 3 with urgent priority queue 00:30:55.914 Starting thread on core 0 with urgent priority queue 00:30:55.914 QEMU NVMe Ctrl (12340 ) core 0: 7097.00 IO/s 14.09 secs/100000 ios 00:30:55.914 QEMU NVMe Ctrl (12340 ) core 1: 6872.33 IO/s 14.55 secs/100000 ios 00:30:55.914 QEMU NVMe Ctrl (12340 ) core 2: 3552.33 IO/s 28.15 secs/100000 ios 00:30:55.914 QEMU NVMe Ctrl (12340 ) core 3: 3813.67 IO/s 26.22 secs/100000 ios 00:30:55.914 ======================================================== 00:30:55.914 00:30:55.914 00:30:55.914 real 0m3.362s 00:30:55.914 user 0m9.218s 00:30:55.914 sys 0m0.120s 00:30:55.914 05:13:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:55.914 05:13:25 -- common/autotest_common.sh@10 -- # set +x 00:30:55.914 ************************************ 00:30:55.914 END TEST nvme_arbitration 00:30:55.914 ************************************ 00:30:55.914 05:13:25 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:30:55.914 05:13:25 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:30:55.914 05:13:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:55.914 05:13:25 -- common/autotest_common.sh@10 -- # set +x 00:30:55.914 ************************************ 00:30:55.914 START TEST nvme_single_aen 00:30:55.914 ************************************ 00:30:55.914 05:13:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:30:55.914 [2024-04-27 05:13:25.741552] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:30:55.914 [2024-04-27 05:13:25.741689] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.173 [2024-04-27 05:13:25.935730] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:30:56.173 Asynchronous Event Request test 00:30:56.173 Attached to 0000:00:06.0 00:30:56.173 Reset controller to setup AER completions for this process 00:30:56.173 Registering asynchronous event callbacks... 00:30:56.173 Getting orig temperature thresholds of all controllers 00:30:56.173 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:56.173 Setting all controllers temperature threshold low to trigger AER 00:30:56.173 Waiting for all controllers temperature threshold to be set lower 00:30:56.173 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:56.173 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:30:56.173 Waiting for all controllers to trigger AER and reset threshold 00:30:56.173 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:56.173 Cleaning up... 
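The single-AER exercise above works by manipulating the temperature threshold: the controller reports a composite temperature of 323 Kelvin against a 343 Kelvin threshold, so pushing the threshold below the current reading forces a SMART/Health asynchronous event (the aer_cb for log page 2 seen in the output), after which the test restores the original value. A rough command-level sketch of the same trick, assuming a kernel-attached controller at /dev/nvme0 and stock nvme-cli rather than the SPDK userspace driver the test actually uses:

  # 1. Current composite temperature and the temperature-threshold feature (FID 0x04)
  nvme smart-log   /dev/nvme0 | grep -i '^temperature'
  nvme get-feature /dev/nvme0 -f 0x04

  # 2. Drop the threshold below the current reading (0x0140 = 320 Kelvin); the
  #    controller raises a temperature async event and flags it in critical_warning
  nvme set-feature /dev/nvme0 -f 0x04 -v 0x0140
  nvme smart-log   /dev/nvme0 | grep -i 'critical_warning'

  # 3. Restore the threshold reported originally (0x0157 = 343 Kelvin)
  nvme set-feature /dev/nvme0 -f 0x04 -v 0x0157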
00:30:56.173 00:30:56.173 real 0m0.276s 00:30:56.173 user 0m0.104s 00:30:56.173 sys 0m0.091s 00:30:56.173 05:13:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:56.173 05:13:25 -- common/autotest_common.sh@10 -- # set +x 00:30:56.173 ************************************ 00:30:56.173 END TEST nvme_single_aen 00:30:56.173 ************************************ 00:30:56.173 05:13:26 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:30:56.173 05:13:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:56.173 05:13:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:56.173 05:13:26 -- common/autotest_common.sh@10 -- # set +x 00:30:56.173 ************************************ 00:30:56.173 START TEST nvme_doorbell_aers 00:30:56.173 ************************************ 00:30:56.173 05:13:26 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:30:56.173 05:13:26 -- nvme/nvme.sh@70 -- # bdfs=() 00:30:56.173 05:13:26 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:30:56.173 05:13:26 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:30:56.173 05:13:26 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:30:56.173 05:13:26 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:56.173 05:13:26 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:56.173 05:13:26 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:56.173 05:13:26 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:56.173 05:13:26 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:56.173 05:13:26 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:56.173 05:13:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:30:56.173 05:13:26 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:30:56.173 05:13:26 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:30:56.431 [2024-04-27 05:13:26.346699] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 151131) is not found. Dropping the request. 00:31:06.414 Executing: test_write_invalid_db 00:31:06.414 Waiting for AER completion... 00:31:06.414 Failure: test_write_invalid_db 00:31:06.414 00:31:06.414 Executing: test_invalid_db_write_overflow_sq 00:31:06.414 Waiting for AER completion... 00:31:06.414 Failure: test_invalid_db_write_overflow_sq 00:31:06.414 00:31:06.414 Executing: test_invalid_db_write_overflow_cq 00:31:06.414 Waiting for AER completion... 
00:31:06.414 Failure: test_invalid_db_write_overflow_cq 00:31:06.414 00:31:06.414 00:31:06.414 real 0m10.105s 00:31:06.414 user 0m8.420s 00:31:06.414 sys 0m1.617s 00:31:06.414 05:13:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:06.414 05:13:36 -- common/autotest_common.sh@10 -- # set +x 00:31:06.414 ************************************ 00:31:06.414 END TEST nvme_doorbell_aers 00:31:06.414 ************************************ 00:31:06.414 05:13:36 -- nvme/nvme.sh@97 -- # uname 00:31:06.414 05:13:36 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:31:06.414 05:13:36 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:31:06.414 05:13:36 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:31:06.414 05:13:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:06.414 05:13:36 -- common/autotest_common.sh@10 -- # set +x 00:31:06.414 ************************************ 00:31:06.414 START TEST nvme_multi_aen 00:31:06.414 ************************************ 00:31:06.414 05:13:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:31:06.414 [2024-04-27 05:13:36.240389] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:31:06.414 [2024-04-27 05:13:36.240767] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:06.671 [2024-04-27 05:13:36.450085] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:31:06.671 [2024-04-27 05:13:36.450166] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 151131) is not found. Dropping the request. 00:31:06.671 [2024-04-27 05:13:36.450282] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 151131) is not found. Dropping the request. 00:31:06.671 [2024-04-27 05:13:36.450311] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 151131) is not found. Dropping the request. 00:31:06.671 [2024-04-27 05:13:36.454233] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:31:06.671 Child process pid: 151326 00:31:06.671 [2024-04-27 05:13:36.454377] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:06.930 [Child] Asynchronous Event Request test 00:31:06.930 [Child] Attached to 0000:00:06.0 00:31:06.930 [Child] Registering asynchronous event callbacks... 00:31:06.930 [Child] Getting orig temperature thresholds of all controllers 00:31:06.930 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:06.930 [Child] Waiting for all controllers to trigger AER and reset threshold 00:31:06.930 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:06.930 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:06.930 [Child] Cleaning up... 00:31:06.930 Asynchronous Event Request test 00:31:06.930 Attached to 0000:00:06.0 00:31:06.930 Reset controller to setup AER completions for this process 00:31:06.930 Registering asynchronous event callbacks... 
00:31:06.930 Getting orig temperature thresholds of all controllers 00:31:06.930 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:06.930 Setting all controllers temperature threshold low to trigger AER 00:31:06.930 Waiting for all controllers temperature threshold to be set lower 00:31:06.930 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:06.930 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:31:06.930 Waiting for all controllers to trigger AER and reset threshold 00:31:06.930 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:06.930 Cleaning up... 00:31:06.930 00:31:06.930 real 0m0.609s 00:31:06.930 user 0m0.177s 00:31:06.930 sys 0m0.246s 00:31:06.930 05:13:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:06.930 05:13:36 -- common/autotest_common.sh@10 -- # set +x 00:31:06.930 ************************************ 00:31:06.930 END TEST nvme_multi_aen 00:31:06.930 ************************************ 00:31:07.188 05:13:36 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:31:07.188 05:13:36 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:31:07.188 05:13:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:07.188 05:13:36 -- common/autotest_common.sh@10 -- # set +x 00:31:07.188 ************************************ 00:31:07.188 START TEST nvme_startup 00:31:07.188 ************************************ 00:31:07.188 05:13:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:31:07.447 Initializing NVMe Controllers 00:31:07.447 Attached to 0000:00:06.0 00:31:07.447 Initialization complete. 00:31:07.447 Time used:189847.938 (us). 00:31:07.447 00:31:07.447 real 0m0.270s 00:31:07.447 user 0m0.105s 00:31:07.447 sys 0m0.105s 00:31:07.447 05:13:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:07.447 05:13:37 -- common/autotest_common.sh@10 -- # set +x 00:31:07.447 ************************************ 00:31:07.447 END TEST nvme_startup 00:31:07.447 ************************************ 00:31:07.447 05:13:37 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:31:07.447 05:13:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:07.447 05:13:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:07.447 05:13:37 -- common/autotest_common.sh@10 -- # set +x 00:31:07.447 ************************************ 00:31:07.447 START TEST nvme_multi_secondary 00:31:07.447 ************************************ 00:31:07.447 05:13:37 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:31:07.447 05:13:37 -- nvme/nvme.sh@52 -- # pid0=151385 00:31:07.447 05:13:37 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:31:07.447 05:13:37 -- nvme/nvme.sh@54 -- # pid1=151386 00:31:07.447 05:13:37 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:31:07.447 05:13:37 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:31:10.728 Initializing NVMe Controllers 00:31:10.728 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:10.728 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:31:10.728 Initialization complete. Launching workers. 
00:31:10.728 ======================================================== 00:31:10.728 Latency(us) 00:31:10.728 Device Information : IOPS MiB/s Average min max 00:31:10.728 PCIE (0000:00:06.0) NSID 1 from core 1: 35422.82 138.37 451.38 113.41 2037.36 00:31:10.728 ======================================================== 00:31:10.728 Total : 35422.82 138.37 451.38 113.41 2037.36 00:31:10.728 00:31:10.987 Initializing NVMe Controllers 00:31:10.987 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:10.987 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:31:10.987 Initialization complete. Launching workers. 00:31:10.987 ======================================================== 00:31:10.987 Latency(us) 00:31:10.987 Device Information : IOPS MiB/s Average min max 00:31:10.987 PCIE (0000:00:06.0) NSID 1 from core 2: 14807.15 57.84 1079.57 127.68 20347.29 00:31:10.987 ======================================================== 00:31:10.987 Total : 14807.15 57.84 1079.57 127.68 20347.29 00:31:10.987 00:31:10.987 05:13:40 -- nvme/nvme.sh@56 -- # wait 151385 00:31:12.890 Initializing NVMe Controllers 00:31:12.890 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:12.890 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:31:12.890 Initialization complete. Launching workers. 00:31:12.890 ======================================================== 00:31:12.890 Latency(us) 00:31:12.890 Device Information : IOPS MiB/s Average min max 00:31:12.890 PCIE (0000:00:06.0) NSID 1 from core 0: 43018.60 168.04 371.61 82.00 1828.88 00:31:12.890 ======================================================== 00:31:12.890 Total : 43018.60 168.04 371.61 82.00 1828.88 00:31:12.890 00:31:12.890 05:13:42 -- nvme/nvme.sh@57 -- # wait 151386 00:31:12.890 05:13:42 -- nvme/nvme.sh@61 -- # pid0=151461 00:31:12.890 05:13:42 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:31:12.890 05:13:42 -- nvme/nvme.sh@63 -- # pid1=151462 00:31:12.890 05:13:42 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:31:12.890 05:13:42 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:31:16.178 Initializing NVMe Controllers 00:31:16.178 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:16.178 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:31:16.178 Initialization complete. Launching workers. 00:31:16.178 ======================================================== 00:31:16.178 Latency(us) 00:31:16.178 Device Information : IOPS MiB/s Average min max 00:31:16.178 PCIE (0000:00:06.0) NSID 1 from core 1: 35522.35 138.76 450.17 115.22 1443.35 00:31:16.178 ======================================================== 00:31:16.178 Total : 35522.35 138.76 450.17 115.22 1443.35 00:31:16.178 00:31:16.437 Initializing NVMe Controllers 00:31:16.437 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:16.437 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:31:16.437 Initialization complete. Launching workers. 
00:31:16.437 ======================================================== 00:31:16.437 Latency(us) 00:31:16.437 Device Information : IOPS MiB/s Average min max 00:31:16.437 PCIE (0000:00:06.0) NSID 1 from core 0: 35124.40 137.20 455.22 121.64 1794.15 00:31:16.437 ======================================================== 00:31:16.437 Total : 35124.40 137.20 455.22 121.64 1794.15 00:31:16.437 00:31:18.439 Initializing NVMe Controllers 00:31:18.439 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:18.439 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:31:18.439 Initialization complete. Launching workers. 00:31:18.439 ======================================================== 00:31:18.439 Latency(us) 00:31:18.439 Device Information : IOPS MiB/s Average min max 00:31:18.439 PCIE (0000:00:06.0) NSID 1 from core 2: 17836.30 69.67 896.77 140.71 28265.68 00:31:18.439 ======================================================== 00:31:18.439 Total : 17836.30 69.67 896.77 140.71 28265.68 00:31:18.439 00:31:18.439 05:13:47 -- nvme/nvme.sh@65 -- # wait 151461 00:31:18.439 05:13:47 -- nvme/nvme.sh@66 -- # wait 151462 00:31:18.439 00:31:18.439 real 0m10.676s 00:31:18.439 user 0m18.618s 00:31:18.439 sys 0m0.787s 00:31:18.439 05:13:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:18.439 05:13:47 -- common/autotest_common.sh@10 -- # set +x 00:31:18.439 ************************************ 00:31:18.439 END TEST nvme_multi_secondary 00:31:18.439 ************************************ 00:31:18.439 05:13:47 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:31:18.439 05:13:47 -- nvme/nvme.sh@102 -- # kill_stub 00:31:18.439 05:13:47 -- common/autotest_common.sh@1065 -- # [[ -e /proc/150679 ]] 00:31:18.439 05:13:47 -- common/autotest_common.sh@1066 -- # kill 150679 00:31:18.439 05:13:47 -- common/autotest_common.sh@1067 -- # wait 150679 00:31:19.008 [2024-04-27 05:13:48.802102] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 151325) is not found. Dropping the request. 00:31:19.008 [2024-04-27 05:13:48.802294] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 151325) is not found. Dropping the request. 00:31:19.008 [2024-04-27 05:13:48.802372] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 151325) is not found. Dropping the request. 00:31:19.008 [2024-04-27 05:13:48.802440] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 151325) is not found. Dropping the request. 00:31:19.008 05:13:48 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:31:19.008 05:13:48 -- common/autotest_common.sh@1073 -- # echo 2 00:31:19.008 05:13:48 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:31:19.008 05:13:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:19.008 05:13:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:19.008 05:13:48 -- common/autotest_common.sh@10 -- # set +x 00:31:19.008 ************************************ 00:31:19.008 START TEST bdev_nvme_reset_stuck_adm_cmd 00:31:19.008 ************************************ 00:31:19.008 05:13:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:31:19.267 * Looking for test storage... 
00:31:19.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:19.267 05:13:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:31:19.267 05:13:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:31:19.267 05:13:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:31:19.267 05:13:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:31:19.267 05:13:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:31:19.267 05:13:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:31:19.267 05:13:48 -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:19.267 05:13:48 -- common/autotest_common.sh@1509 -- # local bdfs 00:31:19.267 05:13:48 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:19.267 05:13:48 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:19.267 05:13:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:19.267 05:13:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:19.267 05:13:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:19.267 05:13:48 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:19.267 05:13:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:19.267 05:13:49 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:19.267 05:13:49 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:31:19.267 05:13:49 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:31:19.267 05:13:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:31:19.267 05:13:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:31:19.267 05:13:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=151618 00:31:19.267 05:13:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:19.267 05:13:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 151618 00:31:19.267 05:13:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:31:19.267 05:13:49 -- common/autotest_common.sh@819 -- # '[' -z 151618 ']' 00:31:19.267 05:13:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.267 05:13:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:19.267 05:13:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.267 05:13:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:19.267 05:13:49 -- common/autotest_common.sh@10 -- # set +x 00:31:19.267 [2024-04-27 05:13:49.124737] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:31:19.268 [2024-04-27 05:13:49.125018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151618 ] 00:31:19.527 [2024-04-27 05:13:49.332448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:19.527 [2024-04-27 05:13:49.450542] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:19.527 [2024-04-27 05:13:49.451026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:19.527 [2024-04-27 05:13:49.451262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.527 [2024-04-27 05:13:49.451189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:19.527 [2024-04-27 05:13:49.451648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:20.464 05:13:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:20.464 05:13:50 -- common/autotest_common.sh@852 -- # return 0 00:31:20.464 05:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:31:20.464 05:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.464 05:13:50 -- common/autotest_common.sh@10 -- # set +x 00:31:20.464 nvme0n1 00:31:20.464 05:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.464 05:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:31:20.464 05:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_WM0Db.txt 00:31:20.464 05:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:31:20.464 05:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.464 05:13:50 -- common/autotest_common.sh@10 -- # set +x 00:31:20.464 true 00:31:20.464 05:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.464 05:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:31:20.464 05:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1714194830 00:31:20.464 05:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=151646 00:31:20.464 05:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:20.464 05:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:31:20.464 05:13:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:22.369 05:13:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.369 05:13:52 -- common/autotest_common.sh@10 -- # set +x 00:31:22.369 [2024-04-27 05:13:52.150298] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:31:22.369 [2024-04-27 05:13:52.150984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:22.369 [2024-04-27 05:13:52.151303] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:31:22.369 [2024-04-27 05:13:52.151495] nvme_qpair.c: 
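Condensed, the stuck-admin-command scenario this test drives is a short RPC sequence: attach the controller, arm an error injection that holds the next Get Features admin command (opc 10) without submitting it, issue that command in the background, then reset the controller (as traced next) and check that the stuck command completes with the injected status within the timeout. A sketch using the same rpc.py and arguments traced here, where GET_FEATURES_B64 stands in for the base64 payload shown above:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0

  # hold the next Get Features admin command for up to 15 s instead of submitting it
  $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

  # fire the Get Features command; it now sits pending inside the driver
  $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$GET_FEATURES_B64" &
  sleep 2

  # the reset must complete the pending command promptly with the injected status
  $RPC bdev_nvme_reset_controller nvme0
  wait
  $RPC bdev_nvme_detach_controller nvme0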
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.369 [2024-04-27 05:13:52.153683] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:22.369 05:13:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.369 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 151646 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 151646 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 151646 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:22.369 05:13:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.369 05:13:52 -- common/autotest_common.sh@10 -- # set +x 00:31:22.369 05:13:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_WM0Db.txt 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_WM0Db.txt 00:31:22.369 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 151618 00:31:22.369 05:13:52 -- common/autotest_common.sh@926 -- # '[' -z 151618 ']' 00:31:22.369 05:13:52 -- common/autotest_common.sh@930 -- # kill -0 151618 00:31:22.369 05:13:52 -- common/autotest_common.sh@931 -- # uname 00:31:22.369 
05:13:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:22.369 05:13:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 151618 00:31:22.369 05:13:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:22.369 killing process with pid 151618 00:31:22.369 05:13:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:22.369 05:13:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 151618' 00:31:22.369 05:13:52 -- common/autotest_common.sh@945 -- # kill 151618 00:31:22.369 05:13:52 -- common/autotest_common.sh@950 -- # wait 151618 00:31:23.306 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:31:23.306 05:13:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:31:23.306 00:31:23.306 real 0m4.078s 00:31:23.306 user 0m14.282s 00:31:23.306 sys 0m0.745s 00:31:23.306 ************************************ 00:31:23.306 END TEST bdev_nvme_reset_stuck_adm_cmd 00:31:23.306 ************************************ 00:31:23.306 05:13:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:23.306 05:13:52 -- common/autotest_common.sh@10 -- # set +x 00:31:23.306 05:13:53 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:31:23.306 05:13:53 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:31:23.306 05:13:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:23.306 05:13:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:23.306 05:13:53 -- common/autotest_common.sh@10 -- # set +x 00:31:23.306 ************************************ 00:31:23.306 START TEST nvme_fio 00:31:23.306 ************************************ 00:31:23.306 05:13:53 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:31:23.306 05:13:53 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:31:23.306 05:13:53 -- nvme/nvme.sh@32 -- # ran_fio=false 00:31:23.306 05:13:53 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:31:23.306 05:13:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:23.306 05:13:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:23.306 05:13:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:23.306 05:13:53 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:23.306 05:13:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:23.306 05:13:53 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:23.306 05:13:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:31:23.306 05:13:53 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0') 00:31:23.306 05:13:53 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:31:23.306 05:13:53 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:23.306 05:13:53 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:31:23.306 05:13:53 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:23.564 05:13:53 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:31:23.564 05:13:53 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:23.823 05:13:53 -- nvme/nvme.sh@41 -- # bs=4096 00:31:23.823 05:13:53 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:23.823 
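The fio_nvme wrapper invoked above ends up running stock fio with the SPDK external ioengine preloaded (plus libasan.so.6, only needed because this SPDK build uses ASAN), addressing the controller by transport and traddr instead of a block device, as the next trace line shows. A rough standalone equivalent with the paths from this run and the example_config.fio job spelled out inline (thread=1 is assumed to be required by the plugin, as in the shipped example config):

  PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme

  LD_PRELOAD=$PLUGIN /usr/src/fio/fio --name=test --ioengine=spdk --thread=1 \
      --filename='trtype=PCIe traddr=0000.00.06.0' \
      --rw=randrw --bs=4096 --iodepth=128 --time_based --runtime=2s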
05:13:53 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:23.823 05:13:53 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:23.823 05:13:53 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:23.823 05:13:53 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:23.823 05:13:53 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:23.823 05:13:53 -- common/autotest_common.sh@1320 -- # shift 00:31:23.823 05:13:53 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:23.823 05:13:53 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:23.823 05:13:53 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:23.823 05:13:53 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:23.823 05:13:53 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:23.823 05:13:53 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:31:23.823 05:13:53 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:31:23.823 05:13:53 -- common/autotest_common.sh@1326 -- # break 00:31:23.823 05:13:53 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:23.823 05:13:53 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:23.823 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:23.823 fio-3.35 00:31:23.823 Starting 1 thread 00:31:27.106 00:31:27.106 test: (groupid=0, jobs=1): err= 0: pid=151783: Sat Apr 27 05:13:56 2024 00:31:27.106 read: IOPS=17.4k, BW=68.0MiB/s (71.3MB/s)(136MiB/2001msec) 00:31:27.106 slat (nsec): min=4328, max=84366, avg=5817.78, stdev=2540.57 00:31:27.106 clat (usec): min=227, max=10795, avg=3657.21, stdev=410.22 00:31:27.106 lat (usec): min=232, max=10880, avg=3663.03, stdev=410.74 00:31:27.106 clat percentiles (usec): 00:31:27.106 | 1.00th=[ 3097], 5.00th=[ 3228], 10.00th=[ 3294], 20.00th=[ 3392], 00:31:27.106 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3589], 60.00th=[ 3654], 00:31:27.106 | 70.00th=[ 3720], 80.00th=[ 3818], 90.00th=[ 4015], 95.00th=[ 4359], 00:31:27.106 | 99.00th=[ 4883], 99.50th=[ 5997], 99.90th=[ 7046], 99.95th=[ 8717], 00:31:27.106 | 99.99th=[10683] 00:31:27.106 bw ( KiB/s): min=65184, max=70552, per=98.71%, avg=68693.33, stdev=3040.95, samples=3 00:31:27.106 iops : min=16296, max=17638, avg=17173.33, stdev=760.24, samples=3 00:31:27.106 write: IOPS=17.4k, BW=68.0MiB/s (71.3MB/s)(136MiB/2001msec); 0 zone resets 00:31:27.106 slat (nsec): min=4433, max=72482, avg=6133.68, stdev=2705.59 00:31:27.106 clat (usec): min=267, max=10717, avg=3673.14, stdev=418.99 00:31:27.106 lat (usec): min=273, max=10740, avg=3679.27, stdev=419.51 00:31:27.106 clat percentiles (usec): 00:31:27.106 | 1.00th=[ 3130], 5.00th=[ 3261], 10.00th=[ 3326], 20.00th=[ 3425], 00:31:27.106 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3621], 60.00th=[ 3687], 00:31:27.106 | 70.00th=[ 3752], 80.00th=[ 3851], 90.00th=[ 4015], 95.00th=[ 4359], 00:31:27.106 | 99.00th=[ 4883], 99.50th=[ 6194], 99.90th=[ 7111], 99.95th=[ 
8979], 00:31:27.106 | 99.99th=[10552] 00:31:27.106 bw ( KiB/s): min=65648, max=70144, per=98.41%, avg=68554.67, stdev=2520.92, samples=3 00:31:27.106 iops : min=16412, max=17536, avg=17138.67, stdev=630.23, samples=3 00:31:27.106 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:31:27.106 lat (msec) : 2=0.05%, 4=89.67%, 10=10.22%, 20=0.03% 00:31:27.106 cpu : usr=99.95%, sys=0.00%, ctx=2, majf=0, minf=37 00:31:27.106 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:27.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:27.107 issued rwts: total=34811,34849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.107 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:27.107 00:31:27.107 Run status group 0 (all jobs): 00:31:27.107 READ: bw=68.0MiB/s (71.3MB/s), 68.0MiB/s-68.0MiB/s (71.3MB/s-71.3MB/s), io=136MiB (143MB), run=2001-2001msec 00:31:27.107 WRITE: bw=68.0MiB/s (71.3MB/s), 68.0MiB/s-68.0MiB/s (71.3MB/s-71.3MB/s), io=136MiB (143MB), run=2001-2001msec 00:31:27.366 ----------------------------------------------------- 00:31:27.366 Suppressions used: 00:31:27.366 count bytes template 00:31:27.366 1 32 /usr/src/fio/parse.c 00:31:27.366 ----------------------------------------------------- 00:31:27.366 00:31:27.366 ************************************ 00:31:27.366 END TEST nvme_fio 00:31:27.366 ************************************ 00:31:27.366 05:13:57 -- nvme/nvme.sh@44 -- # ran_fio=true 00:31:27.366 05:13:57 -- nvme/nvme.sh@46 -- # true 00:31:27.366 00:31:27.366 real 0m4.173s 00:31:27.366 user 0m3.413s 00:31:27.366 sys 0m0.436s 00:31:27.366 05:13:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:27.366 05:13:57 -- common/autotest_common.sh@10 -- # set +x 00:31:27.366 ************************************ 00:31:27.366 END TEST nvme 00:31:27.366 ************************************ 00:31:27.366 00:31:27.366 real 0m47.402s 00:31:27.366 user 1m58.439s 00:31:27.366 sys 0m10.441s 00:31:27.366 05:13:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:27.366 05:13:57 -- common/autotest_common.sh@10 -- # set +x 00:31:27.625 05:13:57 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:31:27.625 05:13:57 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:31:27.625 05:13:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:27.625 05:13:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:27.625 05:13:57 -- common/autotest_common.sh@10 -- # set +x 00:31:27.625 ************************************ 00:31:27.625 START TEST nvme_scc 00:31:27.625 ************************************ 00:31:27.625 05:13:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:31:27.625 * Looking for test storage... 
00:31:27.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:27.625 05:13:57 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:31:27.625 05:13:57 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:31:27.625 05:13:57 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:31:27.625 05:13:57 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:27.625 05:13:57 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:27.625 05:13:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.625 05:13:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.625 05:13:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.625 05:13:57 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:27.625 05:13:57 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:27.625 05:13:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:27.625 05:13:57 -- paths/export.sh@5 -- # export PATH 00:31:27.625 05:13:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:27.625 05:13:57 -- nvme/functions.sh@10 -- # ctrls=() 00:31:27.625 05:13:57 -- nvme/functions.sh@10 -- # declare -A ctrls 00:31:27.625 05:13:57 -- nvme/functions.sh@11 -- # nvmes=() 00:31:27.625 05:13:57 -- nvme/functions.sh@11 -- # declare -A nvmes 00:31:27.625 05:13:57 -- nvme/functions.sh@12 -- # bdfs=() 00:31:27.625 05:13:57 -- nvme/functions.sh@12 -- # declare -A bdfs 00:31:27.625 05:13:57 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:31:27.625 05:13:57 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:31:27.625 05:13:57 -- nvme/functions.sh@14 -- # nvme_name= 00:31:27.625 05:13:57 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:27.625 05:13:57 -- nvme/nvme_scc.sh@12 -- # uname 00:31:27.625 05:13:57 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:31:27.625 05:13:57 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 
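The scan that follows shows functions.sh populating an associative array nvme0[...] from nvme id-ctrl output, one eval per field. Stripped of the xtrace machinery, the underlying pattern is a colon-split read loop; a minimal sketch, assuming nvme-cli and a kernel-visible controller at /dev/nvme0 (the harness parses the controller through its own nvme-cli build at /usr/local/src/nvme-cli):

  declare -A ctrl
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}      # field name with padding stripped (vid, sn, mn, mdts, ...)
      [[ -n $reg && -n $val ]] || continue
      ctrl[$reg]=${val# }           # value as printed, minus the leading space
  done < <(nvme id-ctrl /dev/nvme0)

  echo "vid=${ctrl[vid]} sn=${ctrl[sn]} mdts=${ctrl[mdts]}"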
00:31:27.625 05:13:57 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:27.883 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:27.883 Waiting for block devices as requested 00:31:27.883 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:31:28.144 05:13:57 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:31:28.144 05:13:57 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:31:28.144 05:13:57 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:31:28.144 05:13:57 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:31:28.144 05:13:57 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:31:28.144 05:13:57 -- scripts/common.sh@15 -- # local i 00:31:28.144 05:13:57 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:31:28.144 05:13:57 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:31:28.144 05:13:57 -- scripts/common.sh@24 -- # return 0 00:31:28.144 05:13:57 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:31:28.144 05:13:57 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:31:28.144 05:13:57 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@18 -- # shift 00:31:28.144 05:13:57 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 
00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.144 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.144 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.144 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:28.145 05:13:57 -- 
nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- 
# read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.145 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.145 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.145 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:31:28.146 
05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:31:28.146 
05:13:57 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 
05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:31:28.146 05:13:57 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.146 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.146 05:13:57 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:31:28.146 05:13:57 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:31:28.147 05:13:57 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:31:28.147 05:13:57 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:31:28.147 05:13:57 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@18 -- # shift 00:31:28.147 05:13:57 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 
00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 
00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 
05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.147 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:31:28.147 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.147 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.148 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.148 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.148 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.148 05:13:57 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.148 05:13:57 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.148 05:13:57 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.148 05:13:57 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.148 05:13:57 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.148 05:13:57 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.148 05:13:57 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.148 05:13:57 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.148 05:13:57 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:31:28.148 05:13:57 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:28.148 05:13:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:28.148 05:13:57 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:31:28.148 05:13:57 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:31:28.148 05:13:57 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:31:28.148 05:13:57 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:31:28.148 05:13:57 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:31:28.148 05:13:57 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:31:28.148 05:13:57 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:31:28.148 05:13:57 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:31:28.148 05:13:57 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:31:28.148 05:13:57 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:31:28.148 05:13:57 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:31:28.148 05:13:57 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:31:28.148 05:13:57 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:31:28.148 05:13:57 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:31:28.148 05:13:57 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:31:28.148 05:13:57 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:31:28.148 05:13:57 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:31:28.148 05:13:57 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:31:28.148 05:13:57 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:31:28.148 05:13:57 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:31:28.148 05:13:57 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:31:28.148 05:13:57 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:31:28.148 05:13:57 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:31:28.148 05:13:57 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:31:28.148 05:13:57 -- nvme/functions.sh@76 -- # echo 0x15d 00:31:28.148 05:13:57 -- nvme/functions.sh@184 -- # oncs=0x15d 00:31:28.148 05:13:57 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:31:28.148 05:13:57 -- nvme/functions.sh@197 -- # echo nvme0 00:31:28.148 05:13:57 -- nvme/functions.sh@205 
-- # (( 1 > 0 )) 00:31:28.148 05:13:57 -- nvme/functions.sh@206 -- # echo nvme0 00:31:28.148 05:13:57 -- nvme/functions.sh@207 -- # return 0 00:31:28.148 05:13:57 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:31:28.148 05:13:57 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:31:28.148 05:13:57 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:28.714 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:28.714 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:31:30.651 05:14:00 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:31:30.651 05:14:00 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:31:30.651 05:14:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:30.651 05:14:00 -- common/autotest_common.sh@10 -- # set +x 00:31:30.651 ************************************ 00:31:30.651 START TEST nvme_simple_copy 00:31:30.651 ************************************ 00:31:30.651 05:14:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:31:30.651 Initializing NVMe Controllers 00:31:30.651 Attaching to 0000:00:06.0 00:31:30.651 Controller supports SCC. Attached to 0000:00:06.0 00:31:30.651 Namespace ID: 1 size: 5GB 00:31:30.651 Initialization complete. 00:31:30.651 00:31:30.651 Controller QEMU NVMe Ctrl (12340 ) 00:31:30.651 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:31:30.651 Namespace Block Size:4096 00:31:30.651 Writing LBAs 0 to 63 with Random Data 00:31:30.651 Copied LBAs from 0 - 63 to the Destination LBA 256 00:31:30.651 LBAs matching Written Data: 64 00:31:30.651 00:31:30.651 real 0m0.296s 00:31:30.651 user 0m0.117s 00:31:30.651 sys 0m0.080s 00:31:30.651 05:14:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:30.651 05:14:00 -- common/autotest_common.sh@10 -- # set +x 00:31:30.651 ************************************ 00:31:30.651 END TEST nvme_simple_copy 00:31:30.651 ************************************ 00:31:30.651 00:31:30.651 real 0m3.092s 00:31:30.651 user 0m0.803s 00:31:30.651 sys 0m2.173s 00:31:30.651 05:14:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:30.651 ************************************ 00:31:30.651 END TEST nvme_scc 00:31:30.651 05:14:00 -- common/autotest_common.sh@10 -- # set +x 00:31:30.651 ************************************ 00:31:30.651 05:14:00 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:31:30.651 05:14:00 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:31:30.651 05:14:00 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:31:30.651 05:14:00 -- spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]] 00:31:30.651 05:14:00 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:31:30.651 05:14:00 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:31:30.651 05:14:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:30.651 05:14:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:30.651 05:14:00 -- common/autotest_common.sh@10 -- # set +x 00:31:30.651 ************************************ 00:31:30.651 START TEST nvme_rpc 00:31:30.651 ************************************ 00:31:30.651 05:14:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:31:30.651 * Looking for test storage... 
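The nvme_rpc test starting here drives the running spdk_tgt over JSON-RPC. A hedged sketch of the same call sequence that appears in the trace below, using the rpc.py path and PCI address from this run; the firmware call is expected to fail with error -32603 ("open file failed.") because the file does not exist:

  #!/usr/bin/env bash
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BDF=0000:00:06.0
  $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "$BDF"
  # Negative test: applying firmware from a missing file should return "open file failed."
  $RPC bdev_nvme_apply_firmware non_existing_file Nvme0n1 || echo "apply_firmware failed as expected"
  $RPC bdev_nvme_detach_controller Nvme0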
00:31:30.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:30.651 05:14:00 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:30.651 05:14:00 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:31:30.651 05:14:00 -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:30.651 05:14:00 -- common/autotest_common.sh@1509 -- # local bdfs 00:31:30.651 05:14:00 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:30.651 05:14:00 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:30.651 05:14:00 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:30.651 05:14:00 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:30.651 05:14:00 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:30.651 05:14:00 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:30.651 05:14:00 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:30.909 05:14:00 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:30.909 05:14:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:31:30.909 05:14:00 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:31:30.909 05:14:00 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:31:30.909 05:14:00 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=152257 00:31:30.909 05:14:00 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:31:30.909 05:14:00 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:31:30.909 05:14:00 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 152257 00:31:30.909 05:14:00 -- common/autotest_common.sh@819 -- # '[' -z 152257 ']' 00:31:30.909 05:14:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.909 05:14:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:30.909 05:14:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.909 05:14:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:30.909 05:14:00 -- common/autotest_common.sh@10 -- # set +x 00:31:30.909 [2024-04-27 05:14:00.662965] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:31:30.909 [2024-04-27 05:14:00.663233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152257 ] 00:31:31.168 [2024-04-27 05:14:00.840151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:31.168 [2024-04-27 05:14:00.964665] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:31.168 [2024-04-27 05:14:00.965308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.168 [2024-04-27 05:14:00.965367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.735 05:14:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:31.735 05:14:01 -- common/autotest_common.sh@852 -- # return 0 00:31:31.735 05:14:01 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:31:32.301 Nvme0n1 00:31:32.302 05:14:01 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:31:32.302 05:14:01 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:31:32.302 request: 00:31:32.302 { 00:31:32.302 "filename": "non_existing_file", 00:31:32.302 "bdev_name": "Nvme0n1", 00:31:32.302 "method": "bdev_nvme_apply_firmware", 00:31:32.302 "req_id": 1 00:31:32.302 } 00:31:32.302 Got JSON-RPC error response 00:31:32.302 response: 00:31:32.302 { 00:31:32.302 "code": -32603, 00:31:32.302 "message": "open file failed." 00:31:32.302 } 00:31:32.302 05:14:02 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:31:32.302 05:14:02 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:31:32.302 05:14:02 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:32.560 05:14:02 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:31:32.560 05:14:02 -- nvme/nvme_rpc.sh@40 -- # killprocess 152257 00:31:32.560 05:14:02 -- common/autotest_common.sh@926 -- # '[' -z 152257 ']' 00:31:32.560 05:14:02 -- common/autotest_common.sh@930 -- # kill -0 152257 00:31:32.560 05:14:02 -- common/autotest_common.sh@931 -- # uname 00:31:32.560 05:14:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:32.560 05:14:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 152257 00:31:32.560 05:14:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:32.560 05:14:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:32.560 killing process with pid 152257 00:31:32.560 05:14:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 152257' 00:31:32.560 05:14:02 -- common/autotest_common.sh@945 -- # kill 152257 00:31:32.560 05:14:02 -- common/autotest_common.sh@950 -- # wait 152257 00:31:33.510 00:31:33.510 real 0m2.675s 00:31:33.510 user 0m5.161s 00:31:33.510 sys 0m0.694s 00:31:33.510 05:14:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:33.510 05:14:03 -- common/autotest_common.sh@10 -- # set +x 00:31:33.510 ************************************ 00:31:33.510 END TEST nvme_rpc 00:31:33.510 ************************************ 00:31:33.510 05:14:03 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:31:33.510 05:14:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:33.510 05:14:03 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:31:33.510 05:14:03 -- common/autotest_common.sh@10 -- # set +x 00:31:33.510 ************************************ 00:31:33.510 START TEST nvme_rpc_timeouts 00:31:33.510 ************************************ 00:31:33.510 05:14:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:31:33.510 * Looking for test storage... 00:31:33.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:33.510 05:14:03 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:33.510 05:14:03 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_152328 00:31:33.510 05:14:03 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_152328 00:31:33.510 05:14:03 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=152352 00:31:33.510 05:14:03 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:31:33.510 05:14:03 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 152352 00:31:33.510 05:14:03 -- common/autotest_common.sh@819 -- # '[' -z 152352 ']' 00:31:33.510 05:14:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:33.510 05:14:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:33.510 05:14:03 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:31:33.510 05:14:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:33.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:33.510 05:14:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:33.510 05:14:03 -- common/autotest_common.sh@10 -- # set +x 00:31:33.510 [2024-04-27 05:14:03.345037] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:31:33.510 [2024-04-27 05:14:03.345633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152352 ] 00:31:33.768 [2024-04-27 05:14:03.518941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:33.768 [2024-04-27 05:14:03.639777] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:33.768 [2024-04-27 05:14:03.640490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:33.768 [2024-04-27 05:14:03.640536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.700 05:14:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:34.701 05:14:04 -- common/autotest_common.sh@852 -- # return 0 00:31:34.701 Checking default timeout settings: 00:31:34.701 05:14:04 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:31:34.701 05:14:04 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:34.959 Making settings changes with rpc: 00:31:34.959 05:14:04 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:31:34.959 05:14:04 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:31:34.959 Check default vs. 
modified settings: 00:31:34.959 05:14:04 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:31:34.959 05:14:04 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_152328 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_152328 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:31:35.527 Setting action_on_timeout is changed as expected. 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_152328 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_152328 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:31:35.527 Setting timeout_us is changed as expected. 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_152328 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_152328 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:31:35.527 05:14:05 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:31:35.527 Setting timeout_admin_us is changed as expected. 00:31:35.528 05:14:05 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
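The three checks above amount to snapshotting the target configuration before and after bdev_nvme_set_options and comparing the timeout fields. A minimal manual reproduction, assuming spdk_tgt is already listening on /var/tmp/spdk.sock and using illustrative temp-file names instead of the per-pid files in this run:

    # snapshot the defaults (action_on_timeout=none, both timeouts 0)
    scripts/rpc.py save_config > /tmp/settings_default
    # apply the modified timeouts exercised by the test
    scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    scripts/rpc.py save_config > /tmp/settings_modified
    # every field should differ between the two snapshots
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep $setting /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep $setting /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
    done
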
00:31:35.528 05:14:05 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:31:35.528 05:14:05 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_152328 /tmp/settings_modified_152328 00:31:35.528 05:14:05 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 152352 00:31:35.528 05:14:05 -- common/autotest_common.sh@926 -- # '[' -z 152352 ']' 00:31:35.528 05:14:05 -- common/autotest_common.sh@930 -- # kill -0 152352 00:31:35.528 05:14:05 -- common/autotest_common.sh@931 -- # uname 00:31:35.528 05:14:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:35.528 05:14:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 152352 00:31:35.528 05:14:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:35.528 05:14:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:35.528 killing process with pid 152352 00:31:35.528 05:14:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 152352' 00:31:35.528 05:14:05 -- common/autotest_common.sh@945 -- # kill 152352 00:31:35.528 05:14:05 -- common/autotest_common.sh@950 -- # wait 152352 00:31:36.095 RPC TIMEOUT SETTING TEST PASSED. 00:31:36.095 05:14:05 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:31:36.095 00:31:36.095 real 0m2.706s 00:31:36.095 user 0m5.358s 00:31:36.095 sys 0m0.682s 00:31:36.095 05:14:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:36.095 05:14:05 -- common/autotest_common.sh@10 -- # set +x 00:31:36.095 ************************************ 00:31:36.095 END TEST nvme_rpc_timeouts 00:31:36.095 ************************************ 00:31:36.095 05:14:05 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:31:36.095 05:14:05 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:31:36.095 05:14:05 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:31:36.095 05:14:05 -- spdk/autotest.sh@268 -- # timing_exit lib 00:31:36.095 05:14:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:36.095 05:14:05 -- common/autotest_common.sh@10 -- # set +x 00:31:36.095 05:14:05 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:31:36.095 05:14:05 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:31:36.095 05:14:05 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:31:36.095 05:14:05 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:31:36.095 05:14:05 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:31:36.095 05:14:05 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:31:36.095 05:14:05 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:31:36.095 05:14:05 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:31:36.095 05:14:05 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:31:36.095 05:14:05 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:31:36.095 05:14:05 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:36.095 05:14:05 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:36.095 05:14:05 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:36.095 05:14:05 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:31:36.095 05:14:05 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:36.095 05:14:05 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:36.095 05:14:05 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:31:36.095 05:14:05 -- spdk/autotest.sh@378 -- # [[ 1 -eq 1 ]] 00:31:36.095 05:14:05 -- spdk/autotest.sh@379 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:31:36.095 05:14:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:36.095 05:14:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:31:36.095 05:14:05 -- common/autotest_common.sh@10 -- # set +x 00:31:36.095 ************************************ 00:31:36.095 START TEST blockdev_raid5f 00:31:36.095 ************************************ 00:31:36.095 05:14:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:31:36.353 * Looking for test storage... 00:31:36.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:31:36.353 05:14:06 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:31:36.353 05:14:06 -- bdev/nbd_common.sh@6 -- # set -e 00:31:36.353 05:14:06 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:36.353 05:14:06 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:36.353 05:14:06 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:31:36.353 05:14:06 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:31:36.353 05:14:06 -- bdev/blockdev.sh@18 -- # : 00:31:36.353 05:14:06 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:31:36.353 05:14:06 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:31:36.353 05:14:06 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:31:36.353 05:14:06 -- bdev/blockdev.sh@672 -- # uname -s 00:31:36.353 05:14:06 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:31:36.353 05:14:06 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:31:36.353 05:14:06 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:31:36.353 05:14:06 -- bdev/blockdev.sh@681 -- # crypto_device= 00:31:36.353 05:14:06 -- bdev/blockdev.sh@682 -- # dek= 00:31:36.353 05:14:06 -- bdev/blockdev.sh@683 -- # env_ctx= 00:31:36.353 05:14:06 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:31:36.353 05:14:06 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:31:36.353 05:14:06 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:31:36.353 05:14:06 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:31:36.353 05:14:06 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:31:36.353 05:14:06 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=152495 00:31:36.353 05:14:06 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:31:36.353 05:14:06 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:36.353 05:14:06 -- bdev/blockdev.sh@47 -- # waitforlisten 152495 00:31:36.353 05:14:06 -- common/autotest_common.sh@819 -- # '[' -z 152495 ']' 00:31:36.353 05:14:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.353 05:14:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:36.353 05:14:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.353 05:14:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:36.353 05:14:06 -- common/autotest_common.sh@10 -- # set +x 00:31:36.353 [2024-04-27 05:14:06.142036] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:31:36.353 [2024-04-27 05:14:06.142274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152495 ] 00:31:36.612 [2024-04-27 05:14:06.308695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.612 [2024-04-27 05:14:06.399137] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:36.612 [2024-04-27 05:14:06.399369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.179 05:14:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:37.179 05:14:07 -- common/autotest_common.sh@852 -- # return 0 00:31:37.179 05:14:07 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:31:37.179 05:14:07 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:31:37.179 05:14:07 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:31:37.179 05:14:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.179 05:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:37.179 Malloc0 00:31:37.438 Malloc1 00:31:37.438 Malloc2 00:31:37.438 05:14:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.438 05:14:07 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:31:37.438 05:14:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.438 05:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:37.438 05:14:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.438 05:14:07 -- bdev/blockdev.sh@738 -- # cat 00:31:37.438 05:14:07 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:31:37.438 05:14:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.438 05:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:37.438 05:14:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.438 05:14:07 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:31:37.438 05:14:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.438 05:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:37.438 05:14:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.438 05:14:07 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:31:37.438 05:14:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.438 05:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:37.439 05:14:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.439 05:14:07 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:31:37.439 05:14:07 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:31:37.439 05:14:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.439 05:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:37.439 05:14:07 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:31:37.439 05:14:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.439 05:14:07 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:31:37.439 05:14:07 -- bdev/blockdev.sh@747 -- # jq -r .name 00:31:37.439 05:14:07 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "b389325a-f1d9-4255-9da6-806650b7a4d2"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b389325a-f1d9-4255-9da6-806650b7a4d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "b389325a-f1d9-4255-9da6-806650b7a4d2",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "6580e455-f7e7-47e2-af5a-d10af50beb17",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "5607af92-273c-49d3-b4c2-a313b1dccb0f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "fea55716-e1f4-4395-b02f-589e3336bfc7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:31:37.439 05:14:07 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:31:37.439 05:14:07 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:31:37.439 05:14:07 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:31:37.439 05:14:07 -- bdev/blockdev.sh@752 -- # killprocess 152495 00:31:37.439 05:14:07 -- common/autotest_common.sh@926 -- # '[' -z 152495 ']' 00:31:37.439 05:14:07 -- common/autotest_common.sh@930 -- # kill -0 152495 00:31:37.439 05:14:07 -- common/autotest_common.sh@931 -- # uname 00:31:37.439 05:14:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:37.439 05:14:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 152495 00:31:37.439 05:14:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:37.439 killing process with pid 152495 00:31:37.439 05:14:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:37.439 05:14:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 152495' 00:31:37.439 05:14:07 -- common/autotest_common.sh@945 -- # kill 152495 00:31:37.439 05:14:07 -- common/autotest_common.sh@950 -- # wait 152495 00:31:38.372 05:14:08 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:38.372 05:14:08 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:31:38.372 05:14:08 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:31:38.372 05:14:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:38.372 05:14:08 -- common/autotest_common.sh@10 -- # set +x 00:31:38.372 ************************************ 00:31:38.372 START TEST bdev_hello_world 00:31:38.372 ************************************ 00:31:38.372 05:14:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:31:38.372 [2024-04-27 05:14:08.078751] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:31:38.372 [2024-04-27 05:14:08.079023] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152533 ] 00:31:38.372 [2024-04-27 05:14:08.247900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.630 [2024-04-27 05:14:08.320007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.888 [2024-04-27 05:14:08.594935] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:31:38.888 [2024-04-27 05:14:08.595054] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:31:38.888 [2024-04-27 05:14:08.595095] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:31:38.888 [2024-04-27 05:14:08.595583] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:31:38.888 [2024-04-27 05:14:08.595800] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:31:38.888 [2024-04-27 05:14:08.595880] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:31:38.889 [2024-04-27 05:14:08.596017] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:31:38.889 00:31:38.889 [2024-04-27 05:14:08.596102] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:31:39.147 00:31:39.147 real 0m0.986s 00:31:39.147 user 0m0.565s 00:31:39.147 sys 0m0.303s 00:31:39.147 05:14:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:39.147 05:14:09 -- common/autotest_common.sh@10 -- # set +x 00:31:39.147 ************************************ 00:31:39.147 END TEST bdev_hello_world 00:31:39.147 ************************************ 00:31:39.147 05:14:09 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:31:39.147 05:14:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:39.147 05:14:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:39.147 05:14:09 -- common/autotest_common.sh@10 -- # set +x 00:31:39.147 ************************************ 00:31:39.147 START TEST bdev_bounds 00:31:39.147 ************************************ 00:31:39.147 05:14:09 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:31:39.147 05:14:09 -- bdev/blockdev.sh@288 -- # bdevio_pid=152571 00:31:39.147 05:14:09 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:31:39.147 Process bdevio pid: 152571 00:31:39.147 05:14:09 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 152571' 00:31:39.147 05:14:09 -- bdev/blockdev.sh@291 -- # waitforlisten 152571 00:31:39.147 05:14:09 -- common/autotest_common.sh@819 -- # '[' -z 152571 ']' 00:31:39.147 05:14:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.147 05:14:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:39.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:39.147 05:14:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:39.147 05:14:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:39.147 05:14:09 -- common/autotest_common.sh@10 -- # set +x 00:31:39.147 05:14:09 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:39.405 [2024-04-27 05:14:09.125634] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:31:39.406 [2024-04-27 05:14:09.126102] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152571 ] 00:31:39.406 [2024-04-27 05:14:09.303755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:39.665 [2024-04-27 05:14:09.391973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.665 [2024-04-27 05:14:09.392100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.665 [2024-04-27 05:14:09.392099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:40.231 05:14:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:40.231 05:14:10 -- common/autotest_common.sh@852 -- # return 0 00:31:40.231 05:14:10 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:40.489 I/O targets: 00:31:40.489 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:31:40.489 00:31:40.489 00:31:40.489 CUnit - A unit testing framework for C - Version 2.1-3 00:31:40.489 http://cunit.sourceforge.net/ 00:31:40.489 00:31:40.489 00:31:40.489 Suite: bdevio tests on: raid5f 00:31:40.489 Test: blockdev write read block ...passed 00:31:40.489 Test: blockdev write zeroes read block ...passed 00:31:40.489 Test: blockdev write zeroes read no split ...passed 00:31:40.489 Test: blockdev write zeroes read split ...passed 00:31:40.489 Test: blockdev write zeroes read split partial ...passed 00:31:40.489 Test: blockdev reset ...passed 00:31:40.489 Test: blockdev write read 8 blocks ...passed 00:31:40.489 Test: blockdev write read size > 128k ...passed 00:31:40.489 Test: blockdev write read invalid size ...passed 00:31:40.489 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:40.489 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:40.489 Test: blockdev write read max offset ...passed 00:31:40.489 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:40.489 Test: blockdev writev readv 8 blocks ...passed 00:31:40.489 Test: blockdev writev readv 30 x 1block ...passed 00:31:40.489 Test: blockdev writev readv block ...passed 00:31:40.489 Test: blockdev writev readv size > 128k ...passed 00:31:40.489 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:40.489 Test: blockdev comparev and writev ...passed 00:31:40.489 Test: blockdev nvme passthru rw ...passed 00:31:40.489 Test: blockdev nvme passthru vendor specific ...passed 00:31:40.489 Test: blockdev nvme admin passthru ...passed 00:31:40.489 Test: blockdev copy ...passed 00:31:40.489 00:31:40.489 Run Summary: Type Total Ran Passed Failed Inactive 00:31:40.489 suites 1 1 n/a 0 0 00:31:40.489 tests 23 23 23 0 0 00:31:40.489 asserts 130 130 130 0 n/a 00:31:40.489 00:31:40.489 Elapsed time = 0.315 seconds 00:31:40.489 0 00:31:40.489 05:14:10 -- bdev/blockdev.sh@293 -- # killprocess 152571 00:31:40.489 05:14:10 -- common/autotest_common.sh@926 -- # '[' -z 152571 ']' 
00:31:40.490 05:14:10 -- common/autotest_common.sh@930 -- # kill -0 152571 00:31:40.490 05:14:10 -- common/autotest_common.sh@931 -- # uname 00:31:40.490 05:14:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:40.747 05:14:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 152571 00:31:40.747 05:14:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:40.747 killing process with pid 152571 00:31:40.747 05:14:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:40.747 05:14:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 152571' 00:31:40.747 05:14:10 -- common/autotest_common.sh@945 -- # kill 152571 00:31:40.747 05:14:10 -- common/autotest_common.sh@950 -- # wait 152571 00:31:41.005 05:14:10 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:31:41.005 00:31:41.005 real 0m1.776s 00:31:41.005 user 0m4.306s 00:31:41.005 sys 0m0.486s 00:31:41.005 ************************************ 00:31:41.005 END TEST bdev_bounds 00:31:41.005 ************************************ 00:31:41.005 05:14:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:41.005 05:14:10 -- common/autotest_common.sh@10 -- # set +x 00:31:41.005 05:14:10 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:31:41.005 05:14:10 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:31:41.005 05:14:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:41.005 05:14:10 -- common/autotest_common.sh@10 -- # set +x 00:31:41.005 ************************************ 00:31:41.005 START TEST bdev_nbd 00:31:41.005 ************************************ 00:31:41.005 05:14:10 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:31:41.005 05:14:10 -- bdev/blockdev.sh@298 -- # uname -s 00:31:41.005 05:14:10 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:31:41.005 05:14:10 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:41.005 05:14:10 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:41.005 05:14:10 -- bdev/blockdev.sh@302 -- # bdev_all=('raid5f') 00:31:41.005 05:14:10 -- bdev/blockdev.sh@302 -- # local bdev_all 00:31:41.005 05:14:10 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:31:41.005 05:14:10 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:31:41.005 05:14:10 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:31:41.005 05:14:10 -- bdev/blockdev.sh@309 -- # local nbd_all 00:31:41.005 05:14:10 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:31:41.005 05:14:10 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:31:41.005 05:14:10 -- bdev/blockdev.sh@312 -- # local nbd_list 00:31:41.005 05:14:10 -- bdev/blockdev.sh@313 -- # bdev_list=('raid5f') 00:31:41.005 05:14:10 -- bdev/blockdev.sh@313 -- # local bdev_list 00:31:41.005 05:14:10 -- bdev/blockdev.sh@316 -- # nbd_pid=152628 00:31:41.005 05:14:10 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:31:41.005 05:14:10 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:41.005 05:14:10 -- bdev/blockdev.sh@318 -- 
# waitforlisten 152628 /var/tmp/spdk-nbd.sock 00:31:41.005 05:14:10 -- common/autotest_common.sh@819 -- # '[' -z 152628 ']' 00:31:41.005 05:14:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:41.005 05:14:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:41.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:41.005 05:14:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:41.005 05:14:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:41.005 05:14:10 -- common/autotest_common.sh@10 -- # set +x 00:31:41.263 [2024-04-27 05:14:10.968369] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:31:41.263 [2024-04-27 05:14:10.968636] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:41.263 [2024-04-27 05:14:11.138631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.521 [2024-04-27 05:14:11.236468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.087 05:14:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:42.087 05:14:11 -- common/autotest_common.sh@852 -- # return 0 00:31:42.087 05:14:11 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:31:42.087 05:14:11 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:42.087 05:14:11 -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:31:42.087 05:14:11 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:31:42.087 05:14:11 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:31:42.087 05:14:11 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:42.087 05:14:11 -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:31:42.087 05:14:11 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:31:42.087 05:14:11 -- bdev/nbd_common.sh@24 -- # local i 00:31:42.087 05:14:11 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:31:42.087 05:14:11 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:31:42.087 05:14:11 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:31:42.087 05:14:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:31:42.346 05:14:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:31:42.346 05:14:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:31:42.346 05:14:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:31:42.346 05:14:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:31:42.346 05:14:12 -- common/autotest_common.sh@857 -- # local i 00:31:42.346 05:14:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:42.346 05:14:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:42.346 05:14:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:31:42.346 05:14:12 -- common/autotest_common.sh@861 -- # break 00:31:42.346 05:14:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:42.346 05:14:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:42.346 05:14:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:42.346 1+0 records in 00:31:42.346 1+0 
records out 00:31:42.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035924 s, 11.4 MB/s 00:31:42.346 05:14:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:42.346 05:14:12 -- common/autotest_common.sh@874 -- # size=4096 00:31:42.346 05:14:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:42.346 05:14:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:42.346 05:14:12 -- common/autotest_common.sh@877 -- # return 0 00:31:42.346 05:14:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:42.346 05:14:12 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:31:42.346 05:14:12 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:42.604 05:14:12 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:31:42.604 { 00:31:42.604 "nbd_device": "/dev/nbd0", 00:31:42.604 "bdev_name": "raid5f" 00:31:42.604 } 00:31:42.604 ]' 00:31:42.604 05:14:12 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:31:42.604 05:14:12 -- bdev/nbd_common.sh@119 -- # echo '[ 00:31:42.604 { 00:31:42.604 "nbd_device": "/dev/nbd0", 00:31:42.604 "bdev_name": "raid5f" 00:31:42.604 } 00:31:42.604 ]' 00:31:42.604 05:14:12 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:31:42.604 05:14:12 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:42.604 05:14:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:42.604 05:14:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:42.604 05:14:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:42.604 05:14:12 -- bdev/nbd_common.sh@51 -- # local i 00:31:42.604 05:14:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:42.604 05:14:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:42.862 05:14:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:42.862 05:14:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:42.862 05:14:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:42.862 05:14:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:42.862 05:14:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:42.862 05:14:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:42.862 05:14:12 -- bdev/nbd_common.sh@41 -- # break 00:31:42.862 05:14:12 -- bdev/nbd_common.sh@45 -- # return 0 00:31:42.862 05:14:12 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:42.862 05:14:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:42.862 05:14:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@65 -- # true 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@65 -- # count=0 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@122 -- # count=0 00:31:43.120 05:14:12 -- 
bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@127 -- # return 0 00:31:43.120 05:14:12 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@12 -- # local i 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:43.120 05:14:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:31:43.377 /dev/nbd0 00:31:43.377 05:14:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:43.377 05:14:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:43.377 05:14:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:31:43.377 05:14:13 -- common/autotest_common.sh@857 -- # local i 00:31:43.377 05:14:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:43.377 05:14:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:43.377 05:14:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:31:43.377 05:14:13 -- common/autotest_common.sh@861 -- # break 00:31:43.377 05:14:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:43.377 05:14:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:43.377 05:14:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:43.377 1+0 records in 00:31:43.377 1+0 records out 00:31:43.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339033 s, 12.1 MB/s 00:31:43.377 05:14:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:43.377 05:14:13 -- common/autotest_common.sh@874 -- # size=4096 00:31:43.377 05:14:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:43.377 05:14:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:43.377 05:14:13 -- common/autotest_common.sh@877 -- # return 0 00:31:43.377 05:14:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:43.377 05:14:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:43.377 05:14:13 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:43.378 05:14:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:43.378 05:14:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:43.635 05:14:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:43.635 { 00:31:43.635 "nbd_device": "/dev/nbd0", 00:31:43.635 "bdev_name": "raid5f" 00:31:43.635 } 00:31:43.635 ]' 
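Taken together, the nbd plumbing above and the data check that follows boil down to a few RPCs against the bdev_svc app on /var/tmp/spdk-nbd.sock plus ordinary block I/O through the kernel nbd driver. A rough manual equivalent (nbd module loaded, file names illustrative):

    # export the raid5f bdev as a kernel block device
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0
    # confirm the export; prints the nbd_device/bdev_name pair seen above
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
    # push 1 MiB of random data through the nbd device and compare it back
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0
    # detach the device when finished
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
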
00:31:43.635 05:14:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:43.635 { 00:31:43.635 "nbd_device": "/dev/nbd0", 00:31:43.635 "bdev_name": "raid5f" 00:31:43.635 } 00:31:43.635 ]' 00:31:43.635 05:14:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@65 -- # count=1 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@66 -- # echo 1 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@95 -- # count=1 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:31:43.894 256+0 records in 00:31:43.894 256+0 records out 00:31:43.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00833653 s, 126 MB/s 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:43.894 256+0 records in 00:31:43.894 256+0 records out 00:31:43.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307056 s, 34.1 MB/s 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@51 -- # local i 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:43.894 05:14:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:44.154 05:14:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:44.154 05:14:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:44.154 05:14:13 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:44.154 05:14:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:44.154 05:14:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:44.154 05:14:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:44.154 05:14:13 -- bdev/nbd_common.sh@41 -- # break 00:31:44.154 05:14:13 -- bdev/nbd_common.sh@45 -- # return 0 00:31:44.154 05:14:13 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:44.154 05:14:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:44.154 05:14:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:44.413 05:14:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:44.413 05:14:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:44.413 05:14:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:44.413 05:14:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:44.413 05:14:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:44.413 05:14:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:44.413 05:14:14 -- bdev/nbd_common.sh@65 -- # true 00:31:44.413 05:14:14 -- bdev/nbd_common.sh@65 -- # count=0 00:31:44.413 05:14:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:44.413 05:14:14 -- bdev/nbd_common.sh@104 -- # count=0 00:31:44.413 05:14:14 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:44.413 05:14:14 -- bdev/nbd_common.sh@109 -- # return 0 00:31:44.413 05:14:14 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:44.413 05:14:14 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:44.413 05:14:14 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:31:44.413 05:14:14 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:31:44.413 05:14:14 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:31:44.413 05:14:14 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:31:44.672 malloc_lvol_verify 00:31:44.672 05:14:14 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:31:44.931 b6b3c4df-8a6b-4b59-8e87-239e9f900a67 00:31:44.931 05:14:14 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:31:45.191 79d271a2-0d48-4334-bbb1-7adab5c27014 00:31:45.191 05:14:14 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:31:45.449 /dev/nbd0 00:31:45.449 05:14:15 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:31:45.449 mke2fs 1.46.5 (30-Dec-2021) 00:31:45.449 00:31:45.449 Filesystem too small for a journal 00:31:45.450 Discarding device blocks: 0/1024 done 00:31:45.450 Creating filesystem with 1024 4k blocks and 1024 inodes 00:31:45.450 00:31:45.450 Allocating group tables: 0/1 done 00:31:45.450 Writing inode tables: 0/1 done 00:31:45.450 Writing superblocks and filesystem accounting information: 0/1 done 00:31:45.450 00:31:45.450 05:14:15 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:31:45.450 05:14:15 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:45.450 05:14:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:45.450 05:14:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:45.450 05:14:15 -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:31:45.450 05:14:15 -- bdev/nbd_common.sh@51 -- # local i 00:31:45.450 05:14:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:45.450 05:14:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:45.708 05:14:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:45.708 05:14:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:45.709 05:14:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:45.709 05:14:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:45.709 05:14:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:45.709 05:14:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:45.709 05:14:15 -- bdev/nbd_common.sh@41 -- # break 00:31:45.709 05:14:15 -- bdev/nbd_common.sh@45 -- # return 0 00:31:45.709 05:14:15 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:31:45.709 05:14:15 -- bdev/nbd_common.sh@147 -- # return 0 00:31:45.709 05:14:15 -- bdev/blockdev.sh@324 -- # killprocess 152628 00:31:45.709 05:14:15 -- common/autotest_common.sh@926 -- # '[' -z 152628 ']' 00:31:45.709 05:14:15 -- common/autotest_common.sh@930 -- # kill -0 152628 00:31:45.709 05:14:15 -- common/autotest_common.sh@931 -- # uname 00:31:45.709 05:14:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:45.709 05:14:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 152628 00:31:45.709 05:14:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:45.709 05:14:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:45.709 05:14:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 152628' 00:31:45.709 killing process with pid 152628 00:31:45.709 05:14:15 -- common/autotest_common.sh@945 -- # kill 152628 00:31:45.709 05:14:15 -- common/autotest_common.sh@950 -- # wait 152628 00:31:45.967 05:14:15 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:31:45.967 00:31:45.967 real 0m4.987s 00:31:45.967 user 0m7.398s 00:31:45.967 sys 0m1.265s 00:31:45.967 05:14:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:45.967 05:14:15 -- common/autotest_common.sh@10 -- # set +x 00:31:45.967 ************************************ 00:31:45.967 END TEST bdev_nbd 00:31:45.967 ************************************ 00:31:46.226 05:14:15 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:31:46.226 05:14:15 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:31:46.226 05:14:15 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:31:46.226 05:14:15 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:31:46.226 05:14:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:46.226 05:14:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:46.226 05:14:15 -- common/autotest_common.sh@10 -- # set +x 00:31:46.226 ************************************ 00:31:46.226 START TEST bdev_fio 00:31:46.226 ************************************ 00:31:46.226 05:14:15 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:31:46.226 05:14:15 -- bdev/blockdev.sh@329 -- # local env_context 00:31:46.226 05:14:15 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:31:46.226 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:31:46.226 05:14:15 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:31:46.226 05:14:15 -- bdev/blockdev.sh@337 -- # echo '' 00:31:46.226 05:14:15 -- 
bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:31:46.226 05:14:15 -- bdev/blockdev.sh@337 -- # env_context= 00:31:46.226 05:14:15 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:31:46.226 05:14:15 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:46.226 05:14:15 -- common/autotest_common.sh@1260 -- # local workload=verify 00:31:46.226 05:14:15 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:31:46.226 05:14:15 -- common/autotest_common.sh@1262 -- # local env_context= 00:31:46.226 05:14:15 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:31:46.226 05:14:15 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:31:46.226 05:14:15 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:31:46.226 05:14:15 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:31:46.226 05:14:15 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:46.226 05:14:15 -- common/autotest_common.sh@1280 -- # cat 00:31:46.226 05:14:15 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:31:46.226 05:14:15 -- common/autotest_common.sh@1293 -- # cat 00:31:46.226 05:14:15 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:31:46.226 05:14:15 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:31:46.226 05:14:15 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:31:46.226 05:14:15 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:31:46.226 05:14:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:31:46.226 05:14:15 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:31:46.226 05:14:15 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:31:46.226 05:14:15 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:31:46.226 05:14:15 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:46.226 05:14:15 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:31:46.226 05:14:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:46.226 05:14:15 -- common/autotest_common.sh@10 -- # set +x 00:31:46.226 ************************************ 00:31:46.226 START TEST bdev_fio_rw_verify 00:31:46.226 ************************************ 00:31:46.226 05:14:16 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:46.226 05:14:16 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:46.226 
05:14:16 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:46.226 05:14:16 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:46.227 05:14:16 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:46.227 05:14:16 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:46.227 05:14:16 -- common/autotest_common.sh@1320 -- # shift 00:31:46.227 05:14:16 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:46.227 05:14:16 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:46.227 05:14:16 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:46.227 05:14:16 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:46.227 05:14:16 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:46.227 05:14:16 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:31:46.227 05:14:16 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:31:46.227 05:14:16 -- common/autotest_common.sh@1326 -- # break 00:31:46.227 05:14:16 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:46.227 05:14:16 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:46.485 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:46.485 fio-3.35 00:31:46.485 Starting 1 thread 00:31:58.684 00:31:58.684 job_raid5f: (groupid=0, jobs=1): err= 0: pid=152862: Sat Apr 27 05:14:26 2024 00:31:58.684 read: IOPS=10.9k, BW=42.7MiB/s (44.8MB/s)(427MiB/10001msec) 00:31:58.684 slat (usec): min=19, max=925, avg=22.41, stdev= 5.33 00:31:58.684 clat (usec): min=10, max=1171, avg=145.72, stdev=55.79 00:31:58.684 lat (usec): min=31, max=1194, avg=168.13, stdev=56.81 00:31:58.684 clat percentiles (usec): 00:31:58.684 | 50.000th=[ 147], 99.000th=[ 273], 99.900th=[ 322], 99.990th=[ 363], 00:31:58.684 | 99.999th=[ 1156] 00:31:58.684 write: IOPS=11.4k, BW=44.7MiB/s (46.9MB/s)(441MiB/9878msec); 0 zone resets 00:31:58.684 slat (usec): min=9, max=257, avg=19.23, stdev= 4.89 00:31:58.684 clat (usec): min=57, max=1470, avg=329.49, stdev=54.70 00:31:58.684 lat (usec): min=74, max=1489, avg=348.72, stdev=56.47 00:31:58.684 clat percentiles (usec): 00:31:58.684 | 50.000th=[ 330], 99.000th=[ 486], 99.900th=[ 594], 99.990th=[ 1057], 00:31:58.684 | 99.999th=[ 1450] 00:31:58.684 bw ( KiB/s): min=42104, max=50984, per=98.85%, avg=45229.05, stdev=2599.84, samples=19 00:31:58.684 iops : min=10526, max=12746, avg=11307.26, stdev=649.96, samples=19 00:31:58.684 lat (usec) : 20=0.01%, 50=0.01%, 100=12.41%, 250=38.70%, 500=48.51% 00:31:58.684 lat (usec) : 750=0.35%, 1000=0.02% 00:31:58.684 lat (msec) : 2=0.01% 00:31:58.684 cpu : usr=99.12%, sys=0.84%, ctx=74, majf=0, minf=10831 00:31:58.684 IO depths : 1=7.8%, 2=20.0%, 4=55.0%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:58.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.684 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.684 issued rwts: total=109328,112989,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:31:58.684 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:58.684 00:31:58.684 Run status group 0 (all jobs): 00:31:58.684 READ: bw=42.7MiB/s (44.8MB/s), 42.7MiB/s-42.7MiB/s (44.8MB/s-44.8MB/s), io=427MiB (448MB), run=10001-10001msec 00:31:58.684 WRITE: bw=44.7MiB/s (46.9MB/s), 44.7MiB/s-44.7MiB/s (46.9MB/s-46.9MB/s), io=441MiB (463MB), run=9878-9878msec 00:31:58.684 ----------------------------------------------------- 00:31:58.684 Suppressions used: 00:31:58.684 count bytes template 00:31:58.685 1 7 /usr/src/fio/parse.c 00:31:58.685 268 25728 /usr/src/fio/iolog.c 00:31:58.685 1 904 libcrypto.so 00:31:58.685 ----------------------------------------------------- 00:31:58.685 00:31:58.685 00:31:58.685 real 0m11.422s 00:31:58.685 user 0m11.851s 00:31:58.685 sys 0m0.725s 00:31:58.685 05:14:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:58.685 05:14:27 -- common/autotest_common.sh@10 -- # set +x 00:31:58.685 ************************************ 00:31:58.685 END TEST bdev_fio_rw_verify 00:31:58.685 ************************************ 00:31:58.685 05:14:27 -- bdev/blockdev.sh@348 -- # rm -f 00:31:58.685 05:14:27 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:58.685 05:14:27 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:31:58.685 05:14:27 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:58.685 05:14:27 -- common/autotest_common.sh@1260 -- # local workload=trim 00:31:58.685 05:14:27 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:31:58.685 05:14:27 -- common/autotest_common.sh@1262 -- # local env_context= 00:31:58.685 05:14:27 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:31:58.685 05:14:27 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:31:58.685 05:14:27 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:31:58.685 05:14:27 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:31:58.685 05:14:27 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:58.685 05:14:27 -- common/autotest_common.sh@1280 -- # cat 00:31:58.685 05:14:27 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:31:58.685 05:14:27 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:31:58.685 05:14:27 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:31:58.685 05:14:27 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:31:58.685 05:14:27 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "b389325a-f1d9-4255-9da6-806650b7a4d2"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b389325a-f1d9-4255-9da6-806650b7a4d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "b389325a-f1d9-4255-9da6-806650b7a4d2",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' 
' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "6580e455-f7e7-47e2-af5a-d10af50beb17",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "5607af92-273c-49d3-b4c2-a313b1dccb0f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "fea55716-e1f4-4395-b02f-589e3336bfc7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:31:58.685 05:14:27 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:31:58.685 05:14:27 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:58.685 /home/vagrant/spdk_repo/spdk 00:31:58.685 05:14:27 -- bdev/blockdev.sh@360 -- # popd 00:31:58.685 05:14:27 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:31:58.685 05:14:27 -- bdev/blockdev.sh@362 -- # return 0 00:31:58.685 00:31:58.685 real 0m11.599s 00:31:58.685 user 0m11.967s 00:31:58.685 sys 0m0.784s 00:31:58.685 05:14:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:58.685 05:14:27 -- common/autotest_common.sh@10 -- # set +x 00:31:58.685 ************************************ 00:31:58.685 END TEST bdev_fio 00:31:58.685 ************************************ 00:31:58.685 05:14:27 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:58.685 05:14:27 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:58.685 05:14:27 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:31:58.685 05:14:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:58.685 05:14:27 -- common/autotest_common.sh@10 -- # set +x 00:31:58.685 ************************************ 00:31:58.685 START TEST bdev_verify 00:31:58.685 ************************************ 00:31:58.685 05:14:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:58.685 [2024-04-27 05:14:27.658272] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:31:58.685 [2024-04-27 05:14:27.658520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153025 ] 00:31:58.685 [2024-04-27 05:14:27.830420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:58.685 [2024-04-27 05:14:27.931326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:58.685 [2024-04-27 05:14:27.931327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.685 Running I/O for 5 seconds... 
00:32:03.946 00:32:03.946 Latency(us) 00:32:03.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:03.946 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:03.946 Verification LBA range: start 0x0 length 0x2000 00:32:03.946 raid5f : 5.01 12092.07 47.23 0.00 0.00 16771.84 271.83 13881.72 00:32:03.946 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:03.946 Verification LBA range: start 0x2000 length 0x2000 00:32:03.946 raid5f : 5.01 12151.41 47.47 0.00 0.00 16692.86 288.58 13822.14 00:32:03.946 =================================================================================================================== 00:32:03.946 Total : 24243.49 94.70 0.00 0.00 16732.26 271.83 13881.72 00:32:03.946 00:32:03.946 real 0m5.960s 00:32:03.946 user 0m11.040s 00:32:03.946 sys 0m0.308s 00:32:03.946 05:14:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:03.946 05:14:33 -- common/autotest_common.sh@10 -- # set +x 00:32:03.946 ************************************ 00:32:03.946 END TEST bdev_verify 00:32:03.946 ************************************ 00:32:03.946 05:14:33 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:03.946 05:14:33 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:32:03.946 05:14:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:03.946 05:14:33 -- common/autotest_common.sh@10 -- # set +x 00:32:03.946 ************************************ 00:32:03.946 START TEST bdev_verify_big_io 00:32:03.946 ************************************ 00:32:03.946 05:14:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:03.946 [2024-04-27 05:14:33.666979] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:03.946 [2024-04-27 05:14:33.667226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153123 ] 00:32:03.946 [2024-04-27 05:14:33.833920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:04.210 [2024-04-27 05:14:33.906951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.210 [2024-04-27 05:14:33.906953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.468 Running I/O for 5 seconds... 
00:32:09.732 00:32:09.732 Latency(us) 00:32:09.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:09.732 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:09.732 Verification LBA range: start 0x0 length 0x200 00:32:09.732 raid5f : 5.12 833.25 52.08 0.00 0.00 4018844.36 134.05 127735.62 00:32:09.732 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:09.732 Verification LBA range: start 0x200 length 0x200 00:32:09.732 raid5f : 5.12 835.12 52.19 0.00 0.00 4007781.20 238.31 126782.37 00:32:09.732 =================================================================================================================== 00:32:09.732 Total : 1668.37 104.27 0.00 0.00 4013307.60 134.05 127735.62 00:32:09.732 00:32:09.732 real 0m6.044s 00:32:09.732 user 0m11.213s 00:32:09.732 sys 0m0.324s 00:32:09.732 05:14:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:09.732 ************************************ 00:32:09.732 05:14:39 -- common/autotest_common.sh@10 -- # set +x 00:32:09.732 END TEST bdev_verify_big_io 00:32:09.732 ************************************ 00:32:09.993 05:14:39 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:09.993 05:14:39 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:09.993 05:14:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:09.993 05:14:39 -- common/autotest_common.sh@10 -- # set +x 00:32:09.993 ************************************ 00:32:09.993 START TEST bdev_write_zeroes 00:32:09.993 ************************************ 00:32:09.993 05:14:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:09.993 [2024-04-27 05:14:39.763395] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:09.993 [2024-04-27 05:14:39.763663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153217 ] 00:32:10.251 [2024-04-27 05:14:39.932598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.251 [2024-04-27 05:14:39.999723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.510 Running I/O for 1 seconds... 
00:32:11.446 00:32:11.446 Latency(us) 00:32:11.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.447 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:11.447 raid5f : 1.00 26542.21 103.68 0.00 0.00 4807.16 1563.93 6136.55 00:32:11.447 =================================================================================================================== 00:32:11.447 Total : 26542.21 103.68 0.00 0.00 4807.16 1563.93 6136.55 00:32:11.705 00:32:11.705 real 0m1.924s 00:32:11.705 user 0m1.515s 00:32:11.705 sys 0m0.296s 00:32:11.705 05:14:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:11.705 05:14:41 -- common/autotest_common.sh@10 -- # set +x 00:32:11.705 ************************************ 00:32:11.705 END TEST bdev_write_zeroes 00:32:11.705 ************************************ 00:32:11.964 05:14:41 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:11.964 05:14:41 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:11.964 05:14:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:11.964 05:14:41 -- common/autotest_common.sh@10 -- # set +x 00:32:11.964 ************************************ 00:32:11.964 START TEST bdev_json_nonenclosed 00:32:11.964 ************************************ 00:32:11.964 05:14:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:11.964 [2024-04-27 05:14:41.735096] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:11.964 [2024-04-27 05:14:41.735281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153255 ] 00:32:12.223 [2024-04-27 05:14:41.893802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.223 [2024-04-27 05:14:42.000548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.223 [2024-04-27 05:14:42.000838] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:32:12.223 [2024-04-27 05:14:42.000887] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:12.223 00:32:12.223 real 0m0.455s 00:32:12.223 user 0m0.239s 00:32:12.223 sys 0m0.116s 00:32:12.223 05:14:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:12.223 ************************************ 00:32:12.223 END TEST bdev_json_nonenclosed 00:32:12.223 05:14:42 -- common/autotest_common.sh@10 -- # set +x 00:32:12.223 ************************************ 00:32:12.483 05:14:42 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:12.483 05:14:42 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:12.483 05:14:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:12.483 05:14:42 -- common/autotest_common.sh@10 -- # set +x 00:32:12.483 ************************************ 00:32:12.483 START TEST bdev_json_nonarray 00:32:12.483 ************************************ 00:32:12.483 05:14:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:12.483 [2024-04-27 05:14:42.250871] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:12.483 [2024-04-27 05:14:42.251103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153294 ] 00:32:12.741 [2024-04-27 05:14:42.420530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.741 [2024-04-27 05:14:42.506622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.741 [2024-04-27 05:14:42.506880] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:32:12.741 [2024-04-27 05:14:42.506933] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:12.741 00:32:12.741 real 0m0.431s 00:32:12.741 user 0m0.207s 00:32:12.741 sys 0m0.125s 00:32:12.741 05:14:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:12.741 05:14:42 -- common/autotest_common.sh@10 -- # set +x 00:32:12.741 ************************************ 00:32:12.741 END TEST bdev_json_nonarray 00:32:12.741 ************************************ 00:32:12.998 05:14:42 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:32:12.998 05:14:42 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:32:12.998 05:14:42 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:32:12.998 05:14:42 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:32:12.998 05:14:42 -- bdev/blockdev.sh@809 -- # cleanup 00:32:12.998 05:14:42 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:32:12.998 05:14:42 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:12.998 05:14:42 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:32:12.998 05:14:42 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:32:12.998 05:14:42 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:32:12.998 05:14:42 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:32:12.998 00:32:12.998 real 0m36.683s 00:32:12.998 user 0m50.664s 00:32:12.998 sys 0m4.827s 00:32:12.998 05:14:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:12.998 05:14:42 -- common/autotest_common.sh@10 -- # set +x 00:32:12.998 ************************************ 00:32:12.998 END TEST blockdev_raid5f 00:32:12.998 ************************************ 00:32:12.998 05:14:42 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:32:12.998 05:14:42 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:32:12.998 05:14:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:12.998 05:14:42 -- common/autotest_common.sh@10 -- # set +x 00:32:12.998 05:14:42 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:32:12.998 05:14:42 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:32:12.998 05:14:42 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:32:12.998 05:14:42 -- common/autotest_common.sh@10 -- # set +x 00:32:14.373 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:14.373 Waiting for block devices as requested 00:32:14.373 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:32:14.941 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:14.941 Cleaning 00:32:14.941 Removing: /var/run/dpdk/spdk0/config 00:32:14.941 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:14.941 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:14.941 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:14.941 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:14.941 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:14.941 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:14.941 Removing: /dev/shm/spdk_tgt_trace.pid116180 00:32:14.941 Removing: /var/run/dpdk/spdk0 00:32:14.941 Removing: /var/run/dpdk/spdk_pid115988 00:32:14.941 Removing: /var/run/dpdk/spdk_pid116180 00:32:14.941 Removing: /var/run/dpdk/spdk_pid116448 00:32:14.941 Removing: /var/run/dpdk/spdk_pid116707 00:32:14.941 Removing: /var/run/dpdk/spdk_pid116886 00:32:14.941 Removing: /var/run/dpdk/spdk_pid116981 00:32:14.941 Removing: /var/run/dpdk/spdk_pid117080 
00:32:14.941 Removing: /var/run/dpdk/spdk_pid117178 00:32:14.941 Removing: /var/run/dpdk/spdk_pid117269 00:32:14.941 Removing: /var/run/dpdk/spdk_pid117310 00:32:14.941 Removing: /var/run/dpdk/spdk_pid117361 00:32:14.941 Removing: /var/run/dpdk/spdk_pid117424 00:32:14.941 Removing: /var/run/dpdk/spdk_pid117550 00:32:14.941 Removing: /var/run/dpdk/spdk_pid118068 00:32:14.941 Removing: /var/run/dpdk/spdk_pid118131 00:32:14.941 Removing: /var/run/dpdk/spdk_pid118191 00:32:14.941 Removing: /var/run/dpdk/spdk_pid118212 00:32:14.941 Removing: /var/run/dpdk/spdk_pid118307 00:32:14.941 Removing: /var/run/dpdk/spdk_pid118331 00:32:14.941 Removing: /var/run/dpdk/spdk_pid118419 00:32:14.941 Removing: /var/run/dpdk/spdk_pid118440 00:32:14.941 Removing: /var/run/dpdk/spdk_pid118497 00:32:14.941 Removing: /var/run/dpdk/spdk_pid118520 00:32:14.941 Removing: /var/run/dpdk/spdk_pid118574 00:32:14.941 Removing: /var/run/dpdk/spdk_pid118597 00:32:15.200 Removing: /var/run/dpdk/spdk_pid118754 00:32:15.200 Removing: /var/run/dpdk/spdk_pid118800 00:32:15.200 Removing: /var/run/dpdk/spdk_pid118836 00:32:15.200 Removing: /var/run/dpdk/spdk_pid118914 00:32:15.200 Removing: /var/run/dpdk/spdk_pid118998 00:32:15.200 Removing: /var/run/dpdk/spdk_pid119037 00:32:15.200 Removing: /var/run/dpdk/spdk_pid119120 00:32:15.200 Removing: /var/run/dpdk/spdk_pid119150 00:32:15.200 Removing: /var/run/dpdk/spdk_pid119195 00:32:15.200 Removing: /var/run/dpdk/spdk_pid119232 00:32:15.200 Removing: /var/run/dpdk/spdk_pid119277 00:32:15.201 Removing: /var/run/dpdk/spdk_pid119306 00:32:15.201 Removing: /var/run/dpdk/spdk_pid119346 00:32:15.201 Removing: /var/run/dpdk/spdk_pid119381 00:32:15.201 Removing: /var/run/dpdk/spdk_pid119426 00:32:15.201 Removing: /var/run/dpdk/spdk_pid119464 00:32:15.201 Removing: /var/run/dpdk/spdk_pid119509 00:32:15.201 Removing: /var/run/dpdk/spdk_pid119539 00:32:15.201 Removing: /var/run/dpdk/spdk_pid119584 00:32:15.201 Removing: /var/run/dpdk/spdk_pid119619 00:32:15.201 Removing: /var/run/dpdk/spdk_pid119660 00:32:15.201 Removing: /var/run/dpdk/spdk_pid119689 00:32:15.201 Removing: /var/run/dpdk/spdk_pid119734 00:32:15.201 Removing: /var/run/dpdk/spdk_pid119764 00:32:15.201 Removing: /var/run/dpdk/spdk_pid119809 00:32:15.201 Removing: /var/run/dpdk/spdk_pid119845 00:32:15.201 Removing: /var/run/dpdk/spdk_pid119902 00:32:15.201 Removing: /var/run/dpdk/spdk_pid119937 00:32:15.201 Removing: /var/run/dpdk/spdk_pid119995 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120031 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120077 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120106 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120151 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120185 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120219 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120256 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120302 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120331 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120369 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120402 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120452 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120490 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120526 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120561 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120601 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120636 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120678 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120758 00:32:15.201 Removing: /var/run/dpdk/spdk_pid120873 00:32:15.201 Removing: /var/run/dpdk/spdk_pid121024 00:32:15.201 
Removing: /var/run/dpdk/spdk_pid121091 00:32:15.201 Removing: /var/run/dpdk/spdk_pid121130 00:32:15.201 Removing: /var/run/dpdk/spdk_pid122361 00:32:15.201 Removing: /var/run/dpdk/spdk_pid122558 00:32:15.201 Removing: /var/run/dpdk/spdk_pid122753 00:32:15.201 Removing: /var/run/dpdk/spdk_pid122857 00:32:15.201 Removing: /var/run/dpdk/spdk_pid122977 00:32:15.201 Removing: /var/run/dpdk/spdk_pid123034 00:32:15.201 Removing: /var/run/dpdk/spdk_pid123058 00:32:15.201 Removing: /var/run/dpdk/spdk_pid123096 00:32:15.201 Removing: /var/run/dpdk/spdk_pid123565 00:32:15.201 Removing: /var/run/dpdk/spdk_pid123641 00:32:15.201 Removing: /var/run/dpdk/spdk_pid123749 00:32:15.201 Removing: /var/run/dpdk/spdk_pid123797 00:32:15.201 Removing: /var/run/dpdk/spdk_pid124965 00:32:15.201 Removing: /var/run/dpdk/spdk_pid125867 00:32:15.201 Removing: /var/run/dpdk/spdk_pid126769 00:32:15.201 Removing: /var/run/dpdk/spdk_pid127896 00:32:15.201 Removing: /var/run/dpdk/spdk_pid128991 00:32:15.201 Removing: /var/run/dpdk/spdk_pid130079 00:32:15.201 Removing: /var/run/dpdk/spdk_pid131593 00:32:15.201 Removing: /var/run/dpdk/spdk_pid132816 00:32:15.201 Removing: /var/run/dpdk/spdk_pid134040 00:32:15.201 Removing: /var/run/dpdk/spdk_pid134732 00:32:15.201 Removing: /var/run/dpdk/spdk_pid135287 00:32:15.201 Removing: /var/run/dpdk/spdk_pid135916 00:32:15.201 Removing: /var/run/dpdk/spdk_pid136377 00:32:15.201 Removing: /var/run/dpdk/spdk_pid136957 00:32:15.201 Removing: /var/run/dpdk/spdk_pid137516 00:32:15.201 Removing: /var/run/dpdk/spdk_pid138183 00:32:15.201 Removing: /var/run/dpdk/spdk_pid138694 00:32:15.201 Removing: /var/run/dpdk/spdk_pid140085 00:32:15.201 Removing: /var/run/dpdk/spdk_pid140698 00:32:15.201 Removing: /var/run/dpdk/spdk_pid141237 00:32:15.201 Removing: /var/run/dpdk/spdk_pid142769 00:32:15.201 Removing: /var/run/dpdk/spdk_pid143451 00:32:15.201 Removing: /var/run/dpdk/spdk_pid144061 00:32:15.201 Removing: /var/run/dpdk/spdk_pid144838 00:32:15.201 Removing: /var/run/dpdk/spdk_pid144882 00:32:15.460 Removing: /var/run/dpdk/spdk_pid144921 00:32:15.460 Removing: /var/run/dpdk/spdk_pid144972 00:32:15.460 Removing: /var/run/dpdk/spdk_pid145086 00:32:15.460 Removing: /var/run/dpdk/spdk_pid145233 00:32:15.460 Removing: /var/run/dpdk/spdk_pid145443 00:32:15.460 Removing: /var/run/dpdk/spdk_pid145731 00:32:15.460 Removing: /var/run/dpdk/spdk_pid145754 00:32:15.460 Removing: /var/run/dpdk/spdk_pid145803 00:32:15.460 Removing: /var/run/dpdk/spdk_pid145822 00:32:15.460 Removing: /var/run/dpdk/spdk_pid145840 00:32:15.460 Removing: /var/run/dpdk/spdk_pid145873 00:32:15.460 Removing: /var/run/dpdk/spdk_pid145892 00:32:15.460 Removing: /var/run/dpdk/spdk_pid145909 00:32:15.460 Removing: /var/run/dpdk/spdk_pid145933 00:32:15.460 Removing: /var/run/dpdk/spdk_pid145949 00:32:15.460 Removing: /var/run/dpdk/spdk_pid145970 00:32:15.460 Removing: /var/run/dpdk/spdk_pid145997 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146016 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146031 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146061 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146081 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146097 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146121 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146137 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146158 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146198 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146221 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146252 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146321 00:32:15.460 Removing: 
/var/run/dpdk/spdk_pid146364 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146384 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146418 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146430 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146446 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146499 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146518 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146555 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146570 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146587 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146606 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146619 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146636 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146653 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146670 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146709 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146744 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146765 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146796 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146816 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146827 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146886 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146905 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146942 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146957 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146974 00:32:15.460 Removing: /var/run/dpdk/spdk_pid146991 00:32:15.460 Removing: /var/run/dpdk/spdk_pid147007 00:32:15.460 Removing: /var/run/dpdk/spdk_pid147020 00:32:15.460 Removing: /var/run/dpdk/spdk_pid147037 00:32:15.460 Removing: /var/run/dpdk/spdk_pid147056 00:32:15.460 Removing: /var/run/dpdk/spdk_pid147141 00:32:15.460 Removing: /var/run/dpdk/spdk_pid147204 00:32:15.460 Removing: /var/run/dpdk/spdk_pid147321 00:32:15.460 Removing: /var/run/dpdk/spdk_pid147344 00:32:15.460 Removing: /var/run/dpdk/spdk_pid147391 00:32:15.460 Removing: /var/run/dpdk/spdk_pid147450 00:32:15.460 Removing: /var/run/dpdk/spdk_pid147475 00:32:15.460 Removing: /var/run/dpdk/spdk_pid147496 00:32:15.460 Removing: /var/run/dpdk/spdk_pid147518 00:32:15.460 Removing: /var/run/dpdk/spdk_pid147561 00:32:15.460 Removing: /var/run/dpdk/spdk_pid147576 00:32:15.460 Removing: /var/run/dpdk/spdk_pid147695 00:32:15.460 Removing: /var/run/dpdk/spdk_pid147950 00:32:15.460 Removing: /var/run/dpdk/spdk_pid147995 00:32:15.460 Removing: /var/run/dpdk/spdk_pid148247 00:32:15.460 Removing: /var/run/dpdk/spdk_pid148362 00:32:15.460 Removing: /var/run/dpdk/spdk_pid148399 00:32:15.460 Removing: /var/run/dpdk/spdk_pid148494 00:32:15.460 Removing: /var/run/dpdk/spdk_pid148562 00:32:15.460 Removing: /var/run/dpdk/spdk_pid148591 00:32:15.460 Removing: /var/run/dpdk/spdk_pid148835 00:32:15.460 Removing: /var/run/dpdk/spdk_pid148965 00:32:15.460 Removing: /var/run/dpdk/spdk_pid149053 00:32:15.460 Removing: /var/run/dpdk/spdk_pid149103 00:32:15.460 Removing: /var/run/dpdk/spdk_pid149136 00:32:15.460 Removing: /var/run/dpdk/spdk_pid149210 00:32:15.460 Removing: /var/run/dpdk/spdk_pid149628 00:32:15.460 Removing: /var/run/dpdk/spdk_pid149666 00:32:15.460 Removing: /var/run/dpdk/spdk_pid149969 00:32:15.719 Removing: /var/run/dpdk/spdk_pid150070 00:32:15.719 Removing: /var/run/dpdk/spdk_pid150166 00:32:15.719 Removing: /var/run/dpdk/spdk_pid150212 00:32:15.719 Removing: /var/run/dpdk/spdk_pid150249 00:32:15.719 Removing: /var/run/dpdk/spdk_pid150280 00:32:15.719 Removing: /var/run/dpdk/spdk_pid151618 00:32:15.719 Removing: /var/run/dpdk/spdk_pid151748 00:32:15.719 Removing: /var/run/dpdk/spdk_pid151753 00:32:15.719 Removing: 
/var/run/dpdk/spdk_pid151770 00:32:15.719 Removing: /var/run/dpdk/spdk_pid152257 00:32:15.719 Removing: /var/run/dpdk/spdk_pid152352 00:32:15.719 Removing: /var/run/dpdk/spdk_pid152495 00:32:15.719 Removing: /var/run/dpdk/spdk_pid152533 00:32:15.719 Removing: /var/run/dpdk/spdk_pid152571 00:32:15.719 Removing: /var/run/dpdk/spdk_pid152842 00:32:15.719 Removing: /var/run/dpdk/spdk_pid153025 00:32:15.719 Removing: /var/run/dpdk/spdk_pid153123 00:32:15.719 Removing: /var/run/dpdk/spdk_pid153217 00:32:15.719 Removing: /var/run/dpdk/spdk_pid153255 00:32:15.719 Removing: /var/run/dpdk/spdk_pid153294 00:32:15.719 Clean 00:32:15.719 killing process with pid 105329 00:32:15.719 killing process with pid 105330 00:32:15.719 05:14:45 -- common/autotest_common.sh@1436 -- # return 0 00:32:15.719 05:14:45 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:32:15.719 05:14:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:15.719 05:14:45 -- common/autotest_common.sh@10 -- # set +x 00:32:15.979 05:14:45 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:32:15.979 05:14:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:15.979 05:14:45 -- common/autotest_common.sh@10 -- # set +x 00:32:15.979 05:14:45 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:15.979 05:14:45 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:15.979 05:14:45 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:15.979 05:14:45 -- spdk/autotest.sh@394 -- # hash lcov 00:32:15.979 05:14:45 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:15.979 05:14:45 -- spdk/autotest.sh@396 -- # hostname 00:32:15.979 05:14:45 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:16.238 geninfo: WARNING: invalid characters removed from testname! 
00:32:54.949 05:15:24 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:00.212 05:15:29 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:03.496 05:15:32 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:06.023 05:15:35 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:09.369 05:15:38 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:11.899 05:15:41 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:15.185 05:15:44 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:15.185 05:15:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:15.185 05:15:44 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:15.185 05:15:44 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:15.185 05:15:44 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:15.185 05:15:44 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:15.185 05:15:44 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:15.185 05:15:44 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:15.185 05:15:44 -- paths/export.sh@5 -- $ export PATH 00:33:15.185 05:15:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:15.185 05:15:44 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:33:15.185 05:15:44 -- common/autobuild_common.sh@435 -- $ date +%s 00:33:15.185 05:15:44 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714194944.XXXXXX 00:33:15.185 05:15:44 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714194944.SaYEFg 00:33:15.185 05:15:44 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:33:15.185 05:15:44 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:33:15.185 05:15:44 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:33:15.185 05:15:44 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:33:15.185 05:15:44 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:33:15.185 05:15:44 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:33:15.185 05:15:44 -- common/autobuild_common.sh@451 -- $ get_config_params 00:33:15.185 05:15:44 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:33:15.185 05:15:44 -- common/autotest_common.sh@10 -- $ set +x 00:33:15.185 05:15:44 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:33:15.185 05:15:44 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:33:15.185 05:15:44 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:33:15.185 05:15:44 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:15.185 05:15:44 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:33:15.185 05:15:44 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:33:15.185 05:15:44 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:33:15.185 05:15:44 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:33:15.185 05:15:44 -- common/autotest_common.sh@10 -- $ set +x 00:33:15.185 05:15:44 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:33:15.185 05:15:44 -- spdk/autopackage.sh@36 -- $ [[ -n v23.11 ]] 00:33:15.185 05:15:44 -- spdk/autopackage.sh@36 -- $ [[ -e /tmp/spdk-ld-path ]] 00:33:15.186 05:15:44 -- spdk/autopackage.sh@37 -- $ source /tmp/spdk-ld-path 00:33:15.186 05:15:44 -- tmp/spdk-ld-path@1 -- $ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:33:15.186 05:15:44 -- tmp/spdk-ld-path@1 
-- $ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:33:15.186 05:15:44 -- tmp/spdk-ld-path@2 -- $ export PKG_CONFIG_PATH= 00:33:15.186 05:15:44 -- tmp/spdk-ld-path@2 -- $ PKG_CONFIG_PATH= 00:33:15.186 05:15:44 -- spdk/autopackage.sh@40 -- $ get_config_params 00:33:15.186 05:15:44 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:33:15.186 05:15:44 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:33:15.186 05:15:44 -- common/autotest_common.sh@10 -- $ set +x 00:33:15.186 05:15:44 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:33:15.186 05:15:44 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --enable-lto 00:33:15.186 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:33:15.186 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:33:15.186 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:33:15.186 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:33:15.444 Using 'verbs' RDMA provider 00:33:28.214 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:33:38.188 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:33:38.188 Creating mk/config.mk...done. 00:33:38.188 Creating mk/cc.flags.mk...done. 00:33:38.188 Type 'make' to build. 00:33:38.188 05:16:07 -- spdk/autopackage.sh@43 -- $ make -j10 00:33:38.188 make[1]: Nothing to be done for 'all'. 
00:33:38.188 CC lib/log/log.o 00:33:38.188 CC lib/ut_mock/mock.o 00:33:38.188 CC lib/log/log_flags.o 00:33:38.188 CC lib/log/log_deprecated.o 00:33:38.188 CC lib/ut/ut.o 00:33:38.446 LIB libspdk_ut_mock.a 00:33:38.446 LIB libspdk_log.a 00:33:38.446 LIB libspdk_ut.a 00:33:38.446 CC lib/util/base64.o 00:33:38.446 CC lib/util/bit_array.o 00:33:38.446 CC lib/util/crc16.o 00:33:38.446 CC lib/util/cpuset.o 00:33:38.446 CC lib/ioat/ioat.o 00:33:38.446 CC lib/util/crc32.o 00:33:38.446 CC lib/util/crc32c.o 00:33:38.446 CC lib/dma/dma.o 00:33:38.446 CXX lib/trace_parser/trace.o 00:33:38.446 CC lib/vfio_user/host/vfio_user_pci.o 00:33:38.704 CC lib/vfio_user/host/vfio_user.o 00:33:38.704 CC lib/util/crc32_ieee.o 00:33:38.704 CC lib/util/crc64.o 00:33:38.704 LIB libspdk_dma.a 00:33:38.704 CC lib/util/dif.o 00:33:38.704 CC lib/util/fd.o 00:33:38.704 CC lib/util/file.o 00:33:38.704 CC lib/util/hexlify.o 00:33:38.704 LIB libspdk_ioat.a 00:33:38.704 CC lib/util/iov.o 00:33:38.704 CC lib/util/math.o 00:33:38.704 CC lib/util/pipe.o 00:33:38.704 CC lib/util/strerror_tls.o 00:33:38.704 CC lib/util/string.o 00:33:38.704 CC lib/util/uuid.o 00:33:38.704 LIB libspdk_vfio_user.a 00:33:38.963 CC lib/util/fd_group.o 00:33:38.963 CC lib/util/xor.o 00:33:38.963 CC lib/util/zipf.o 00:33:38.963 LIB libspdk_util.a 00:33:39.222 LIB libspdk_trace_parser.a 00:33:39.222 CC lib/rdma/common.o 00:33:39.222 CC lib/rdma/rdma_verbs.o 00:33:39.222 CC lib/env_dpdk/env.o 00:33:39.222 CC lib/idxd/idxd.o 00:33:39.222 CC lib/json/json_parse.o 00:33:39.222 CC lib/env_dpdk/memory.o 00:33:39.222 CC lib/conf/conf.o 00:33:39.222 CC lib/json/json_util.o 00:33:39.222 CC lib/idxd/idxd_user.o 00:33:39.222 CC lib/vmd/vmd.o 00:33:39.222 LIB libspdk_conf.a 00:33:39.481 CC lib/vmd/led.o 00:33:39.481 CC lib/json/json_write.o 00:33:39.481 CC lib/env_dpdk/pci.o 00:33:39.481 CC lib/env_dpdk/init.o 00:33:39.481 CC lib/env_dpdk/threads.o 00:33:39.481 LIB libspdk_rdma.a 00:33:39.481 CC lib/env_dpdk/pci_ioat.o 00:33:39.481 LIB libspdk_idxd.a 00:33:39.481 CC lib/env_dpdk/pci_virtio.o 00:33:39.481 CC lib/env_dpdk/pci_vmd.o 00:33:39.481 LIB libspdk_vmd.a 00:33:39.481 CC lib/env_dpdk/pci_idxd.o 00:33:39.481 CC lib/env_dpdk/pci_event.o 00:33:39.481 CC lib/env_dpdk/sigbus_handler.o 00:33:39.481 LIB libspdk_json.a 00:33:39.481 CC lib/env_dpdk/pci_dpdk.o 00:33:39.481 CC lib/env_dpdk/pci_dpdk_2207.o 00:33:39.740 CC lib/env_dpdk/pci_dpdk_2211.o 00:33:39.740 CC lib/jsonrpc/jsonrpc_server.o 00:33:39.740 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:33:39.740 CC lib/jsonrpc/jsonrpc_client.o 00:33:39.741 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:33:39.741 LIB libspdk_jsonrpc.a 00:33:40.000 CC lib/rpc/rpc.o 00:33:40.000 LIB libspdk_env_dpdk.a 00:33:40.000 LIB libspdk_rpc.a 00:33:40.259 CC lib/trace/trace.o 00:33:40.259 CC lib/notify/notify.o 00:33:40.259 CC lib/trace/trace_rpc.o 00:33:40.259 CC lib/notify/notify_rpc.o 00:33:40.259 CC lib/trace/trace_flags.o 00:33:40.259 CC lib/sock/sock_rpc.o 00:33:40.259 CC lib/sock/sock.o 00:33:40.259 LIB libspdk_notify.a 00:33:40.259 LIB libspdk_trace.a 00:33:40.519 LIB libspdk_sock.a 00:33:40.519 CC lib/thread/thread.o 00:33:40.519 CC lib/thread/iobuf.o 00:33:40.519 CC lib/nvme/nvme_ctrlr_cmd.o 00:33:40.519 CC lib/nvme/nvme_ctrlr.o 00:33:40.519 CC lib/nvme/nvme_fabric.o 00:33:40.519 CC lib/nvme/nvme_ns_cmd.o 00:33:40.519 CC lib/nvme/nvme_ns.o 00:33:40.519 CC lib/nvme/nvme_pcie_common.o 00:33:40.519 CC lib/nvme/nvme_qpair.o 00:33:40.519 CC lib/nvme/nvme_pcie.o 00:33:40.778 CC lib/nvme/nvme.o 00:33:41.035 LIB libspdk_thread.a 00:33:41.035 CC 
lib/nvme/nvme_quirks.o 00:33:41.035 CC lib/nvme/nvme_transport.o 00:33:41.035 CC lib/nvme/nvme_discovery.o 00:33:41.292 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:33:41.292 CC lib/blob/blobstore.o 00:33:41.292 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:33:41.292 CC lib/accel/accel.o 00:33:41.292 CC lib/accel/accel_rpc.o 00:33:41.292 CC lib/accel/accel_sw.o 00:33:41.292 CC lib/nvme/nvme_tcp.o 00:33:41.550 CC lib/nvme/nvme_opal.o 00:33:41.550 CC lib/blob/request.o 00:33:41.550 CC lib/blob/zeroes.o 00:33:41.550 CC lib/blob/blob_bs_dev.o 00:33:41.550 CC lib/nvme/nvme_io_msg.o 00:33:41.808 CC lib/nvme/nvme_poll_group.o 00:33:41.808 LIB libspdk_accel.a 00:33:41.808 CC lib/init/json_config.o 00:33:41.808 CC lib/nvme/nvme_zns.o 00:33:41.808 CC lib/nvme/nvme_cuse.o 00:33:41.808 CC lib/nvme/nvme_vfio_user.o 00:33:41.808 CC lib/virtio/virtio.o 00:33:41.808 CC lib/virtio/virtio_vhost_user.o 00:33:41.808 CC lib/init/subsystem.o 00:33:42.067 CC lib/virtio/virtio_vfio_user.o 00:33:42.067 CC lib/init/subsystem_rpc.o 00:33:42.067 CC lib/bdev/bdev.o 00:33:42.067 CC lib/nvme/nvme_rdma.o 00:33:42.067 CC lib/init/rpc.o 00:33:42.067 CC lib/virtio/virtio_pci.o 00:33:42.067 CC lib/bdev/bdev_rpc.o 00:33:42.326 CC lib/bdev/bdev_zone.o 00:33:42.326 CC lib/bdev/part.o 00:33:42.326 CC lib/bdev/scsi_nvme.o 00:33:42.326 LIB libspdk_init.a 00:33:42.326 LIB libspdk_virtio.a 00:33:42.326 CC lib/event/app.o 00:33:42.326 CC lib/event/reactor.o 00:33:42.326 CC lib/event/log_rpc.o 00:33:42.326 CC lib/event/app_rpc.o 00:33:42.326 CC lib/event/scheduler_static.o 00:33:42.326 LIB libspdk_blob.a 00:33:42.585 CC lib/lvol/lvol.o 00:33:42.585 CC lib/blobfs/blobfs.o 00:33:42.585 CC lib/blobfs/tree.o 00:33:42.585 LIB libspdk_event.a 00:33:42.844 LIB libspdk_blobfs.a 00:33:42.844 LIB libspdk_lvol.a 00:33:42.844 LIB libspdk_nvme.a 00:33:43.103 LIB libspdk_bdev.a 00:33:43.363 CC lib/nvmf/ctrlr.o 00:33:43.363 CC lib/nvmf/ctrlr_discovery.o 00:33:43.363 CC lib/nvmf/ctrlr_bdev.o 00:33:43.363 CC lib/nvmf/subsystem.o 00:33:43.363 CC lib/scsi/lun.o 00:33:43.363 CC lib/nvmf/nvmf.o 00:33:43.363 CC lib/scsi/port.o 00:33:43.363 CC lib/scsi/dev.o 00:33:43.363 CC lib/nbd/nbd.o 00:33:43.363 CC lib/ftl/ftl_core.o 00:33:43.363 CC lib/scsi/scsi.o 00:33:43.363 CC lib/nvmf/nvmf_rpc.o 00:33:43.363 CC lib/nvmf/transport.o 00:33:43.363 CC lib/nvmf/tcp.o 00:33:43.622 CC lib/nbd/nbd_rpc.o 00:33:43.622 CC lib/ftl/ftl_init.o 00:33:43.622 CC lib/ftl/ftl_layout.o 00:33:43.622 CC lib/scsi/scsi_bdev.o 00:33:43.622 CC lib/scsi/scsi_pr.o 00:33:43.622 LIB libspdk_nbd.a 00:33:43.622 CC lib/scsi/scsi_rpc.o 00:33:43.622 CC lib/scsi/task.o 00:33:43.622 CC lib/nvmf/rdma.o 00:33:43.881 CC lib/ftl/ftl_debug.o 00:33:43.881 CC lib/ftl/ftl_io.o 00:33:43.881 CC lib/ftl/ftl_sb.o 00:33:43.881 CC lib/ftl/ftl_l2p.o 00:33:43.881 CC lib/ftl/ftl_l2p_flat.o 00:33:43.881 CC lib/ftl/ftl_nv_cache.o 00:33:43.881 LIB libspdk_scsi.a 00:33:43.881 CC lib/ftl/ftl_band.o 00:33:43.881 CC lib/ftl/ftl_band_ops.o 00:33:44.141 CC lib/ftl/ftl_writer.o 00:33:44.141 CC lib/ftl/ftl_rq.o 00:33:44.141 CC lib/ftl/ftl_reloc.o 00:33:44.141 CC lib/iscsi/conn.o 00:33:44.141 CC lib/vhost/vhost.o 00:33:44.141 CC lib/vhost/vhost_rpc.o 00:33:44.141 CC lib/iscsi/init_grp.o 00:33:44.141 CC lib/vhost/vhost_scsi.o 00:33:44.141 CC lib/vhost/vhost_blk.o 00:33:44.141 CC lib/vhost/rte_vhost_user.o 00:33:44.141 CC lib/ftl/ftl_l2p_cache.o 00:33:44.401 CC lib/ftl/ftl_p2l.o 00:33:44.401 CC lib/iscsi/iscsi.o 00:33:44.401 CC lib/ftl/mngt/ftl_mngt.o 00:33:44.660 LIB libspdk_nvmf.a 00:33:44.660 CC lib/iscsi/md5.o 00:33:44.660 CC lib/iscsi/param.o 
00:33:44.660 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:33:44.660 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:33:44.660 CC lib/ftl/mngt/ftl_mngt_startup.o 00:33:44.660 CC lib/ftl/mngt/ftl_mngt_md.o 00:33:44.660 CC lib/iscsi/portal_grp.o 00:33:44.660 CC lib/iscsi/tgt_node.o 00:33:44.660 CC lib/iscsi/iscsi_subsystem.o 00:33:44.660 CC lib/iscsi/iscsi_rpc.o 00:33:44.660 CC lib/ftl/mngt/ftl_mngt_misc.o 00:33:44.660 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:33:44.919 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:33:44.919 CC lib/ftl/mngt/ftl_mngt_band.o 00:33:44.919 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:33:44.919 LIB libspdk_vhost.a 00:33:44.919 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:33:44.919 CC lib/iscsi/task.o 00:33:44.919 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:33:44.919 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:33:44.919 CC lib/ftl/utils/ftl_conf.o 00:33:44.919 CC lib/ftl/utils/ftl_md.o 00:33:44.919 CC lib/ftl/utils/ftl_mempool.o 00:33:44.919 CC lib/ftl/utils/ftl_bitmap.o 00:33:45.178 CC lib/ftl/utils/ftl_property.o 00:33:45.179 LIB libspdk_iscsi.a 00:33:45.179 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:33:45.179 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:33:45.179 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:33:45.179 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:33:45.179 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:33:45.179 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:33:45.179 CC lib/ftl/upgrade/ftl_sb_v3.o 00:33:45.179 CC lib/ftl/upgrade/ftl_sb_v5.o 00:33:45.179 CC lib/ftl/nvc/ftl_nvc_dev.o 00:33:45.179 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:33:45.179 CC lib/ftl/base/ftl_base_dev.o 00:33:45.179 CC lib/ftl/base/ftl_base_bdev.o 00:33:45.440 LIB libspdk_ftl.a 00:33:45.738 CC module/env_dpdk/env_dpdk_rpc.o 00:33:45.738 CC module/scheduler/gscheduler/gscheduler.o 00:33:45.738 CC module/sock/posix/posix.o 00:33:45.738 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:33:45.738 CC module/accel/error/accel_error.o 00:33:45.738 CC module/blob/bdev/blob_bdev.o 00:33:45.738 CC module/accel/ioat/accel_ioat.o 00:33:45.738 CC module/scheduler/dynamic/scheduler_dynamic.o 00:33:45.738 CC module/accel/dsa/accel_dsa.o 00:33:45.738 CC module/accel/iaa/accel_iaa.o 00:33:45.738 LIB libspdk_env_dpdk_rpc.a 00:33:45.738 CC module/accel/iaa/accel_iaa_rpc.o 00:33:45.738 LIB libspdk_scheduler_gscheduler.a 00:33:45.738 LIB libspdk_scheduler_dpdk_governor.a 00:33:45.738 CC module/accel/error/accel_error_rpc.o 00:33:45.738 CC module/accel/dsa/accel_dsa_rpc.o 00:33:45.738 CC module/accel/ioat/accel_ioat_rpc.o 00:33:45.738 LIB libspdk_scheduler_dynamic.a 00:33:46.002 LIB libspdk_blob_bdev.a 00:33:46.002 LIB libspdk_accel_iaa.a 00:33:46.002 LIB libspdk_accel_ioat.a 00:33:46.002 LIB libspdk_accel_dsa.a 00:33:46.002 LIB libspdk_accel_error.a 00:33:46.002 CC module/bdev/gpt/gpt.o 00:33:46.002 CC module/bdev/delay/vbdev_delay.o 00:33:46.002 CC module/bdev/lvol/vbdev_lvol.o 00:33:46.002 CC module/bdev/error/vbdev_error.o 00:33:46.002 CC module/bdev/malloc/bdev_malloc.o 00:33:46.002 CC module/blobfs/bdev/blobfs_bdev.o 00:33:46.002 CC module/bdev/null/bdev_null.o 00:33:46.002 CC module/bdev/nvme/bdev_nvme.o 00:33:46.002 CC module/bdev/passthru/vbdev_passthru.o 00:33:46.002 LIB libspdk_sock_posix.a 00:33:46.261 CC module/bdev/nvme/bdev_nvme_rpc.o 00:33:46.261 CC module/bdev/gpt/vbdev_gpt.o 00:33:46.261 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:33:46.261 CC module/bdev/error/vbdev_error_rpc.o 00:33:46.261 CC module/bdev/null/bdev_null_rpc.o 00:33:46.261 CC module/bdev/malloc/bdev_malloc_rpc.o 00:33:46.261 CC module/bdev/delay/vbdev_delay_rpc.o 00:33:46.261 CC 
module/bdev/passthru/vbdev_passthru_rpc.o 00:33:46.261 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:33:46.261 LIB libspdk_blobfs_bdev.a 00:33:46.261 LIB libspdk_bdev_error.a 00:33:46.261 LIB libspdk_bdev_gpt.a 00:33:46.261 CC module/bdev/nvme/nvme_rpc.o 00:33:46.261 CC module/bdev/nvme/bdev_mdns_client.o 00:33:46.261 LIB libspdk_bdev_null.a 00:33:46.261 LIB libspdk_bdev_malloc.a 00:33:46.261 LIB libspdk_bdev_delay.a 00:33:46.261 LIB libspdk_bdev_passthru.a 00:33:46.520 CC module/bdev/raid/bdev_raid.o 00:33:46.520 CC module/bdev/zone_block/vbdev_zone_block.o 00:33:46.520 CC module/bdev/aio/bdev_aio.o 00:33:46.520 CC module/bdev/split/vbdev_split.o 00:33:46.520 CC module/bdev/ftl/bdev_ftl.o 00:33:46.520 CC module/bdev/split/vbdev_split_rpc.o 00:33:46.520 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:33:46.520 CC module/bdev/aio/bdev_aio_rpc.o 00:33:46.520 LIB libspdk_bdev_lvol.a 00:33:46.520 CC module/bdev/iscsi/bdev_iscsi.o 00:33:46.520 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:33:46.520 LIB libspdk_bdev_split.a 00:33:46.779 CC module/bdev/nvme/vbdev_opal.o 00:33:46.779 CC module/bdev/ftl/bdev_ftl_rpc.o 00:33:46.779 LIB libspdk_bdev_aio.a 00:33:46.779 LIB libspdk_bdev_zone_block.a 00:33:46.779 CC module/bdev/virtio/bdev_virtio_scsi.o 00:33:46.779 CC module/bdev/virtio/bdev_virtio_blk.o 00:33:46.779 CC module/bdev/virtio/bdev_virtio_rpc.o 00:33:46.779 CC module/bdev/nvme/vbdev_opal_rpc.o 00:33:46.779 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:33:46.779 CC module/bdev/raid/bdev_raid_rpc.o 00:33:46.779 LIB libspdk_bdev_ftl.a 00:33:46.779 CC module/bdev/raid/bdev_raid_sb.o 00:33:46.779 LIB libspdk_bdev_iscsi.a 00:33:46.779 CC module/bdev/raid/raid0.o 00:33:46.779 CC module/bdev/raid/raid1.o 00:33:46.779 CC module/bdev/raid/concat.o 00:33:46.779 CC module/bdev/raid/raid5f.o 00:33:47.036 LIB libspdk_bdev_virtio.a 00:33:47.036 LIB libspdk_bdev_nvme.a 00:33:47.295 LIB libspdk_bdev_raid.a 00:33:47.295 CC module/event/subsystems/sock/sock.o 00:33:47.295 CC module/event/subsystems/iobuf/iobuf.o 00:33:47.295 CC module/event/subsystems/vmd/vmd.o 00:33:47.295 CC module/event/subsystems/vmd/vmd_rpc.o 00:33:47.295 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:33:47.295 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:33:47.295 CC module/event/subsystems/scheduler/scheduler.o 00:33:47.554 LIB libspdk_event_sock.a 00:33:47.554 LIB libspdk_event_vmd.a 00:33:47.554 LIB libspdk_event_vhost_blk.a 00:33:47.554 LIB libspdk_event_scheduler.a 00:33:47.554 LIB libspdk_event_iobuf.a 00:33:47.554 CC module/event/subsystems/accel/accel.o 00:33:47.813 LIB libspdk_event_accel.a 00:33:47.813 CC module/event/subsystems/bdev/bdev.o 00:33:48.072 LIB libspdk_event_bdev.a 00:33:48.072 CC module/event/subsystems/nbd/nbd.o 00:33:48.072 CC module/event/subsystems/scsi/scsi.o 00:33:48.072 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:33:48.072 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:33:48.331 LIB libspdk_event_scsi.a 00:33:48.331 LIB libspdk_event_nbd.a 00:33:48.331 LIB libspdk_event_nvmf.a 00:33:48.331 CC module/event/subsystems/iscsi/iscsi.o 00:33:48.331 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:33:48.331 LIB libspdk_event_vhost_scsi.a 00:33:48.590 LIB libspdk_event_iscsi.a 00:33:48.590 CXX app/trace/trace.o 00:33:48.590 TEST_HEADER include/spdk/config.h 00:33:48.590 CXX test/cpp_headers/accel.o 00:33:48.590 CC test/event/event_perf/event_perf.o 00:33:48.590 CC examples/accel/perf/accel_perf.o 00:33:48.590 CC test/bdev/bdevio/bdevio.o 00:33:48.590 CC test/env/mem_callbacks/mem_callbacks.o 00:33:48.590 CC 
test/blobfs/mkfs/mkfs.o 00:33:48.590 CC test/dma/test_dma/test_dma.o 00:33:48.590 CC test/accel/dif/dif.o 00:33:48.590 CC test/app/bdev_svc/bdev_svc.o 00:33:48.849 LINK event_perf 00:33:48.849 CXX test/cpp_headers/accel_module.o 00:33:48.849 LINK bdev_svc 00:33:48.849 LINK mkfs 00:33:48.849 LINK accel_perf 00:33:48.849 CXX test/cpp_headers/assert.o 00:33:49.107 LINK dif 00:33:49.107 LINK bdevio 00:33:49.107 LINK spdk_trace 00:33:49.107 LINK test_dma 00:33:49.107 CXX test/cpp_headers/barrier.o 00:33:49.107 LINK mem_callbacks 00:33:49.364 CXX test/cpp_headers/base64.o 00:33:49.623 CXX test/cpp_headers/bdev.o 00:33:50.190 CXX test/cpp_headers/bdev_module.o 00:33:50.757 CXX test/cpp_headers/bdev_zone.o 00:33:51.325 CC test/env/vtophys/vtophys.o 00:33:51.325 CXX test/cpp_headers/bit_array.o 00:33:51.584 LINK vtophys 00:33:51.842 CXX test/cpp_headers/bit_pool.o 00:33:52.409 CXX test/cpp_headers/blob.o 00:33:52.669 CXX test/cpp_headers/blob_bdev.o 00:33:53.236 CXX test/cpp_headers/blobfs.o 00:33:53.495 CXX test/cpp_headers/blobfs_bdev.o 00:33:53.753 CXX test/cpp_headers/conf.o 00:33:54.320 CXX test/cpp_headers/config.o 00:33:54.320 CXX test/cpp_headers/cpuset.o 00:33:54.578 CXX test/cpp_headers/crc16.o 00:33:55.145 CXX test/cpp_headers/crc32.o 00:33:56.079 CXX test/cpp_headers/crc64.o 00:33:56.647 CC app/trace_record/trace_record.o 00:33:56.647 CXX test/cpp_headers/dif.o 00:33:56.906 CC test/event/reactor/reactor.o 00:33:57.840 CXX test/cpp_headers/dma.o 00:33:57.840 LINK reactor 00:33:57.840 LINK spdk_trace_record 00:33:58.775 CXX test/cpp_headers/endian.o 00:33:59.710 CXX test/cpp_headers/env.o 00:34:00.645 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:34:00.904 CXX test/cpp_headers/env_dpdk.o 00:34:01.839 LINK env_dpdk_post_init 00:34:01.839 CXX test/cpp_headers/event.o 00:34:03.215 CXX test/cpp_headers/fd.o 00:34:04.149 CXX test/cpp_headers/fd_group.o 00:34:05.107 CXX test/cpp_headers/file.o 00:34:06.500 CXX test/cpp_headers/ftl.o 00:34:06.500 CC examples/bdev/hello_world/hello_bdev.o 00:34:07.879 CXX test/cpp_headers/gpt_spec.o 00:34:08.138 LINK hello_bdev 00:34:08.705 CC app/nvmf_tgt/nvmf_main.o 00:34:09.273 CXX test/cpp_headers/hexlify.o 00:34:09.843 LINK nvmf_tgt 00:34:10.410 CXX test/cpp_headers/histogram_data.o 00:34:11.347 CXX test/cpp_headers/idxd.o 00:34:12.725 CXX test/cpp_headers/idxd_spec.o 00:34:13.662 CXX test/cpp_headers/init.o 00:34:15.037 CXX test/cpp_headers/ioat.o 00:34:15.971 CXX test/cpp_headers/ioat_spec.o 00:34:16.904 CXX test/cpp_headers/iscsi_spec.o 00:34:18.279 CXX test/cpp_headers/json.o 00:34:18.845 CC test/event/reactor_perf/reactor_perf.o 00:34:19.104 CXX test/cpp_headers/jsonrpc.o 00:34:19.672 LINK reactor_perf 00:34:20.239 CXX test/cpp_headers/likely.o 00:34:20.806 CXX test/cpp_headers/log.o 00:34:22.182 CXX test/cpp_headers/lvol.o 00:34:23.117 CXX test/cpp_headers/memory.o 00:34:24.492 CXX test/cpp_headers/mmio.o 00:34:25.866 CXX test/cpp_headers/nbd.o 00:34:25.866 CXX test/cpp_headers/notify.o 00:34:27.242 CXX test/cpp_headers/nvme.o 00:34:28.616 CXX test/cpp_headers/nvme_intel.o 00:34:29.550 CXX test/cpp_headers/nvme_ocssd.o 00:34:30.927 CXX test/cpp_headers/nvme_ocssd_spec.o 00:34:32.303 CXX test/cpp_headers/nvme_spec.o 00:34:33.677 CXX test/cpp_headers/nvme_zns.o 00:34:35.053 CXX test/cpp_headers/nvmf.o 00:34:36.428 CC test/env/memory/memory_ut.o 00:34:36.428 CXX test/cpp_headers/nvmf_cmd.o 00:34:37.828 CXX test/cpp_headers/nvmf_fc_spec.o 00:34:39.203 CXX test/cpp_headers/nvmf_spec.o 00:34:40.581 CXX test/cpp_headers/nvmf_transport.o 00:34:40.839 
LINK memory_ut 00:34:41.775 CXX test/cpp_headers/opal.o 00:34:43.678 CXX test/cpp_headers/opal_spec.o 00:34:43.678 CC test/event/app_repeat/app_repeat.o 00:34:44.615 CXX test/cpp_headers/pci_ids.o 00:34:44.874 LINK app_repeat 00:34:46.252 CXX test/cpp_headers/pipe.o 00:34:47.188 CXX test/cpp_headers/queue.o 00:34:47.188 CXX test/cpp_headers/reduce.o 00:34:48.566 CXX test/cpp_headers/rpc.o 00:34:49.943 CXX test/cpp_headers/scheduler.o 00:34:51.317 CXX test/cpp_headers/scsi.o 00:34:52.693 CC test/env/pci/pci_ut.o 00:34:52.693 CXX test/cpp_headers/scsi_spec.o 00:34:54.068 CXX test/cpp_headers/sock.o 00:34:54.632 LINK pci_ut 00:34:55.568 CXX test/cpp_headers/stdinc.o 00:34:56.945 CXX test/cpp_headers/string.o 00:34:58.318 CXX test/cpp_headers/thread.o 00:34:59.693 CXX test/cpp_headers/trace.o 00:35:01.068 CXX test/cpp_headers/trace_parser.o 00:35:02.443 CXX test/cpp_headers/tree.o 00:35:02.443 CXX test/cpp_headers/ublk.o 00:35:04.343 CXX test/cpp_headers/util.o 00:35:05.296 CXX test/cpp_headers/uuid.o 00:35:06.243 CXX test/cpp_headers/version.o 00:35:06.501 CXX test/cpp_headers/vfio_user_pci.o 00:35:07.879 CC test/event/scheduler/scheduler.o 00:35:07.879 CXX test/cpp_headers/vfio_user_spec.o 00:35:09.783 CXX test/cpp_headers/vhost.o 00:35:09.783 LINK scheduler 00:35:11.159 CXX test/cpp_headers/vmd.o 00:35:12.534 CXX test/cpp_headers/xor.o 00:35:13.471 CXX test/cpp_headers/zipf.o 00:35:16.003 CC examples/blob/hello_world/hello_blob.o 00:35:17.375 LINK hello_blob 00:35:32.247 CC examples/ioat/perf/perf.o 00:35:32.247 LINK ioat_perf 00:35:44.459 CC examples/ioat/verify/verify.o 00:35:44.459 CC app/iscsi_tgt/iscsi_tgt.o 00:35:44.717 LINK verify 00:35:44.717 CC examples/blob/cli/blobcli.o 00:35:45.283 LINK iscsi_tgt 00:35:45.850 CC app/spdk_tgt/spdk_tgt.o 00:35:46.108 LINK blobcli 00:35:46.675 LINK spdk_tgt 00:35:48.057 CC test/lvol/esnap/esnap.o 00:35:49.465 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:35:50.852 LINK nvme_fuzz 00:35:54.135 CC examples/nvme/hello_world/hello_world.o 00:35:54.394 LINK hello_world 00:35:58.580 LINK esnap 00:36:16.660 CC examples/bdev/bdevperf/bdevperf.o 00:36:18.036 LINK bdevperf 00:36:30.331 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:36:33.614 CC examples/nvme/reconnect/reconnect.o 00:36:34.550 LINK iscsi_fuzz 00:36:35.120 LINK reconnect 00:36:37.759 CC examples/nvme/nvme_manage/nvme_manage.o 00:36:39.666 LINK nvme_manage 00:36:40.235 CC examples/sock/hello_world/hello_sock.o 00:36:41.614 LINK hello_sock 00:37:13.701 CC app/spdk_lspci/spdk_lspci.o 00:37:13.702 LINK spdk_lspci 00:37:13.702 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:37:13.702 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:37:14.268 CC app/spdk_nvme_perf/perf.o 00:37:14.526 LINK vhost_fuzz 00:37:14.784 CC app/spdk_nvme_identify/identify.o 00:37:15.350 CC app/spdk_nvme_discover/discovery_aer.o 00:37:16.285 LINK spdk_nvme_discover 00:37:16.852 LINK spdk_nvme_perf 00:37:17.111 LINK spdk_nvme_identify 00:37:21.302 CC examples/nvme/arbitration/arbitration.o 00:37:23.836 LINK arbitration 00:37:41.932 CC test/app/histogram_perf/histogram_perf.o 00:37:41.932 LINK histogram_perf 00:37:43.310 CC app/spdk_top/spdk_top.o 00:37:46.599 LINK spdk_top 00:37:49.890 CC app/vhost/vhost.o 00:37:50.461 LINK vhost 00:37:52.404 CC app/spdk_dd/spdk_dd.o 00:37:53.348 LINK spdk_dd 00:37:55.258 CC test/app/jsoncat/jsoncat.o 00:37:55.258 CC app/fio/nvme/fio_plugin.o 00:37:55.827 LINK jsoncat 00:37:55.827 CC examples/nvme/hotplug/hotplug.o 00:37:56.396 LINK spdk_nvme 00:37:56.396 LINK hotplug 00:37:57.773 CC 
app/fio/bdev/fio_plugin.o 00:37:58.340 CC examples/vmd/lsvmd/lsvmd.o 00:37:58.598 LINK spdk_bdev 00:37:58.857 LINK lsvmd 00:37:59.115 CC examples/nvmf/nvmf/nvmf.o 00:38:00.493 LINK nvmf 00:38:07.062 CC test/app/stub/stub.o 00:38:07.321 LINK stub 00:38:17.305 CC examples/vmd/led/led.o 00:38:17.563 LINK led 00:38:25.685 CC examples/nvme/cmb_copy/cmb_copy.o 00:38:25.962 LINK cmb_copy 00:38:26.530 CC examples/util/zipf/zipf.o 00:38:27.466 LINK zipf 00:38:28.402 CC examples/thread/thread/thread_ex.o 00:38:29.782 LINK thread 00:38:35.125 CC examples/idxd/perf/perf.o 00:38:37.027 LINK idxd_perf 00:38:37.964 CC examples/nvme/abort/abort.o 00:38:39.870 LINK abort 00:38:49.861 CC examples/interrupt_tgt/interrupt_tgt.o 00:38:50.800 LINK interrupt_tgt 00:38:51.370 CC test/nvme/aer/aer.o 00:38:52.753 LINK aer 00:38:53.690 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:38:54.626 LINK pmr_persistence 00:38:54.626 CC test/nvme/reset/reset.o 00:38:56.005 LINK reset 00:39:22.550 CC test/nvme/sgl/sgl.o 00:39:24.452 LINK sgl 00:39:36.657 CC test/nvme/e2edp/nvme_dp.o 00:39:36.657 LINK nvme_dp 00:39:37.595 CC test/nvme/overhead/overhead.o 00:39:38.534 CC test/nvme/err_injection/err_injection.o 00:39:39.102 LINK overhead 00:39:39.692 LINK err_injection 00:39:57.816 CC test/nvme/startup/startup.o 00:39:57.816 LINK startup 00:39:58.384 CC test/nvme/reserve/reserve.o 00:39:59.320 CC test/nvme/simple_copy/simple_copy.o 00:39:59.320 LINK reserve 00:40:00.262 LINK simple_copy 00:40:06.833 CC test/nvme/connect_stress/connect_stress.o 00:40:06.833 LINK connect_stress 00:40:08.206 CC test/rpc_client/rpc_client_test.o 00:40:08.206 CC test/nvme/boot_partition/boot_partition.o 00:40:09.142 LINK rpc_client_test 00:40:09.142 LINK boot_partition 00:40:10.080 CC test/nvme/compliance/nvme_compliance.o 00:40:11.457 LINK nvme_compliance 00:40:12.835 CC test/thread/poller_perf/poller_perf.o 00:40:13.403 LINK poller_perf 00:40:23.379 CC test/nvme/fused_ordering/fused_ordering.o 00:40:23.379 LINK fused_ordering 00:40:24.315 CC test/nvme/doorbell_aers/doorbell_aers.o 00:40:24.883 CC test/nvme/fdp/fdp.o 00:40:24.883 LINK doorbell_aers 00:40:25.450 CC test/nvme/cuse/cuse.o 00:40:25.708 LINK fdp 00:40:26.276 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:40:26.276 CC test/unit/lib/accel/accel.c/accel_ut.o 00:40:26.844 LINK cuse 00:40:26.844 LINK histogram_ut 00:40:27.104 CC test/thread/lock/spdk_lock.o 00:40:29.006 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:40:29.265 LINK accel_ut 00:40:29.265 LINK spdk_lock 00:40:29.524 CC test/unit/lib/bdev/part.c/part_ut.o 00:40:32.087 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:40:32.347 LINK scsi_nvme_ut 00:40:33.284 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:40:34.658 LINK gpt_ut 00:40:35.225 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:40:36.159 LINK part_ut 00:40:37.534 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:40:38.101 LINK vbdev_lvol_ut 00:40:38.669 LINK bdev_ut 00:40:40.574 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:40:40.833 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:40:41.771 LINK blob_bdev_ut 00:40:41.771 CC test/unit/lib/blob/blob.c/blob_ut.o 00:40:42.030 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:40:42.289 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:40:42.548 LINK bdev_raid_sb_ut 00:40:42.548 LINK bdev_ut 00:40:42.807 LINK bdev_zone_ut 00:40:42.807 LINK bdev_raid_ut 00:40:42.808 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:40:43.375 CC 
test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:40:43.634 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:40:43.893 LINK vbdev_zone_block_ut 00:40:44.829 LINK concat_ut 00:40:45.766 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:40:45.766 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:40:46.335 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:40:46.595 LINK raid1_ut 00:40:46.595 LINK tree_ut 00:40:47.164 LINK raid5f_ut 00:40:48.101 LINK bdev_nvme_ut 00:40:48.360 LINK blob_ut 00:40:48.619 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:40:48.878 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:40:49.446 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:40:49.706 LINK blobfs_async_ut 00:40:49.965 LINK blobfs_bdev_ut 00:40:50.223 CC test/unit/lib/dma/dma.c/dma_ut.o 00:40:50.223 CC test/unit/lib/event/app.c/app_ut.o 00:40:50.790 LINK blobfs_sync_ut 00:40:50.790 LINK dma_ut 00:40:51.049 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:40:51.049 LINK app_ut 00:40:52.426 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:40:52.426 LINK reactor_ut 00:40:52.685 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:40:53.253 LINK ioat_ut 00:40:55.155 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:40:55.155 LINK conn_ut 00:40:55.414 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:40:56.350 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:40:56.607 LINK json_util_ut 00:40:57.174 LINK jsonrpc_server_ut 00:40:57.740 LINK json_parse_ut 00:40:57.740 CC test/unit/lib/log/log.c/log_ut.o 00:40:57.740 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:40:58.007 LINK log_ut 00:40:58.297 LINK init_grp_ut 00:40:58.297 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:40:58.569 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:40:59.136 CC test/unit/lib/notify/notify.c/notify_ut.o 00:40:59.395 LINK lvol_ut 00:40:59.395 LINK iscsi_ut 00:40:59.395 LINK notify_ut 00:40:59.395 CC test/unit/lib/iscsi/param.c/param_ut.o 00:40:59.653 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:40:59.653 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:40:59.653 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:40:59.912 CC test/unit/lib/sock/sock.c/sock_ut.o 00:40:59.912 LINK param_ut 00:41:00.171 CC test/unit/lib/sock/posix.c/posix_ut.o 00:41:00.171 LINK dev_ut 00:41:01.107 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:41:01.108 LINK nvme_ut 00:41:01.676 LINK posix_ut 00:41:01.676 LINK sock_ut 00:41:01.935 LINK portal_grp_ut 00:41:02.870 LINK tcp_ut 00:41:03.436 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:41:04.008 CC test/unit/lib/thread/thread.c/thread_ut.o 00:41:04.265 LINK tgt_node_ut 00:41:04.524 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:41:04.782 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:41:05.041 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:41:05.300 LINK iobuf_ut 00:41:05.868 CC test/unit/lib/util/base64.c/base64_ut.o 00:41:05.868 LINK lun_ut 00:41:05.868 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:41:05.868 LINK thread_ut 00:41:06.128 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:41:06.128 LINK base64_ut 00:41:06.387 LINK bit_array_ut 00:41:06.646 LINK pci_event_ut 00:41:07.215 LINK nvme_ctrlr_ut 00:41:07.475 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:41:07.734 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:41:07.993 LINK subsystem_ut 00:41:07.993 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:41:07.993 LINK cpuset_ut 00:41:08.253 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:41:08.253 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:41:08.253 CC 
test/unit/lib/rpc/rpc.c/rpc_ut.o 00:41:08.512 LINK crc32_ieee_ut 00:41:08.512 LINK crc16_ut 00:41:08.512 LINK rpc_ut 00:41:08.772 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:41:08.772 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:41:09.031 LINK crc32c_ut 00:41:09.031 LINK scsi_ut 00:41:09.031 LINK ctrlr_ut 00:41:09.031 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:41:09.291 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:41:09.291 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:41:09.291 LINK crc64_ut 00:41:09.551 LINK idxd_user_ut 00:41:09.551 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:41:09.551 CC test/unit/lib/util/dif.c/dif_ut.o 00:41:09.810 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:41:09.810 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:41:09.810 CC test/unit/lib/util/iov.c/iov_ut.o 00:41:10.070 CC test/unit/lib/rdma/common.c/common_ut.o 00:41:10.329 LINK idxd_ut 00:41:10.329 LINK iov_ut 00:41:10.588 LINK dif_ut 00:41:10.589 LINK common_ut 00:41:10.589 LINK vhost_ut 00:41:10.589 LINK scsi_bdev_ut 00:41:10.848 LINK nvme_ctrlr_cmd_ut 00:41:11.786 CC test/unit/lib/util/math.c/math_ut.o 00:41:12.045 LINK math_ut 00:41:12.304 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:41:12.563 CC test/unit/lib/util/string.c/string_ut.o 00:41:12.563 CC test/unit/lib/util/xor.c/xor_ut.o 00:41:12.563 LINK pipe_ut 00:41:12.563 LINK string_ut 00:41:12.563 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:41:12.823 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:41:13.082 LINK xor_ut 00:41:13.082 LINK ftl_l2p_ut 00:41:13.649 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:41:13.649 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:41:13.649 LINK subsystem_ut 00:41:13.649 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:41:13.649 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:41:13.649 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:41:13.907 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:41:13.907 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:41:13.907 LINK scsi_pr_ut 00:41:13.907 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:41:13.907 LINK ftl_bitmap_ut 00:41:14.167 LINK ftl_io_ut 00:41:14.167 LINK ftl_mempool_ut 00:41:14.167 LINK ftl_band_ut 00:41:14.167 LINK ftl_mngt_ut 00:41:14.426 LINK nvme_ctrlr_ocssd_cmd_ut 00:41:14.994 LINK ftl_sb_ut 00:41:15.252 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:41:15.820 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:41:15.820 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:41:15.820 LINK ftl_layout_upgrade_ut 00:41:16.078 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:41:16.336 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:41:16.336 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:41:16.336 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:41:16.336 LINK nvme_ns_ut 00:41:16.336 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:41:16.595 LINK ctrlr_bdev_ut 00:41:16.855 LINK ctrlr_discovery_ut 00:41:16.855 LINK nvme_ns_cmd_ut 00:41:17.114 LINK nvme_ns_ocssd_cmd_ut 00:41:17.114 LINK nvme_poll_group_ut 00:41:17.683 LINK nvme_pcie_ut 00:41:17.683 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:41:18.250 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:41:18.250 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:41:18.509 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:41:18.509 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:41:18.768 LINK nvme_quirks_ut 00:41:18.768 LINK nvme_qpair_ut 00:41:18.768 CC 
test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:41:18.768 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:41:18.768 LINK nvmf_ut 00:41:19.335 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:41:19.594 LINK nvme_io_msg_ut 00:41:19.594 LINK nvme_transport_ut 00:41:19.594 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:41:19.594 LINK rdma_ut 00:41:19.852 LINK nvme_tcp_ut 00:41:20.111 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:41:20.370 LINK transport_ut 00:41:20.938 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:41:21.198 LINK nvme_pcie_common_ut 00:41:21.198 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:41:21.457 LINK nvme_fabric_ut 00:41:21.716 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:41:21.716 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:41:21.988 LINK nvme_opal_ut 00:41:22.938 LINK json_write_ut 00:41:23.872 LINK nvme_rdma_ut 00:41:24.131 LINK nvme_cuse_ut 00:42:20.380 json_parse_ut.c: In function ‘test_parse_nesting’: 00:42:20.380 json_parse_ut.c:616:1: note: variable tracking size limit exceeded with ‘-fvar-tracking-assignments’, retrying without 00:42:20.380 616 | test_parse_nesting(void) 00:42:20.380 | ^ 00:42:20.380 05:24:46 -- spdk/autopackage.sh@44 -- $ make -j10 clean 00:42:20.380 make[1]: Nothing to be done for 'clean'. 00:42:20.639 05:24:50 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:42:20.639 05:24:50 -- common/autotest_common.sh@718 -- $ xtrace_disable 00:42:20.639 05:24:50 -- common/autotest_common.sh@10 -- $ set +x 00:42:20.639 05:24:50 -- spdk/autopackage.sh@48 -- $ timing_finish 00:42:20.639 05:24:50 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:42:20.639 05:24:50 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:42:20.639 05:24:50 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:42:20.898 + [[ -n 3292 ]] 00:42:20.898 + sudo kill 3292 00:42:20.908 [Pipeline] } 00:42:20.930 [Pipeline] // timeout 00:42:20.937 [Pipeline] } 00:42:20.958 [Pipeline] // stage 00:42:20.965 [Pipeline] } 00:42:20.984 [Pipeline] // catchError 00:42:20.995 [Pipeline] stage 00:42:20.998 [Pipeline] { (Stop VM) 00:42:21.015 [Pipeline] sh 00:42:21.300 + vagrant halt 00:42:23.832 ==> default: Halting domain... 00:42:33.822 [Pipeline] sh 00:42:34.101 + vagrant destroy -f 00:42:36.636 ==> default: Removing domain... 00:42:37.215 [Pipeline] sh 00:42:37.493 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest/output 00:42:37.502 [Pipeline] } 00:42:37.518 [Pipeline] // stage 00:42:37.523 [Pipeline] } 00:42:37.539 [Pipeline] // dir 00:42:37.544 [Pipeline] } 00:42:37.555 [Pipeline] // wrap 00:42:37.561 [Pipeline] } 00:42:37.575 [Pipeline] // catchError 00:42:37.583 [Pipeline] stage 00:42:37.585 [Pipeline] { (Epilogue) 00:42:37.598 [Pipeline] sh 00:42:37.878 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:42:55.978 [Pipeline] catchError 00:42:55.979 [Pipeline] { 00:42:55.993 [Pipeline] sh 00:42:56.271 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:42:56.529 Artifacts sizes are good 00:42:56.537 [Pipeline] } 00:42:56.552 [Pipeline] // catchError 00:42:56.562 [Pipeline] archiveArtifacts 00:42:56.568 Archiving artifacts 00:42:56.919 [Pipeline] cleanWs 00:42:56.931 [WS-CLEANUP] Deleting project workspace... 00:42:56.931 [WS-CLEANUP] Deferred wipeout is used... 
00:42:56.937 [WS-CLEANUP] done 00:42:56.939 [Pipeline] } 00:42:56.957 [Pipeline] // stage 00:42:56.962 [Pipeline] } 00:42:56.977 [Pipeline] // node 00:42:56.982 [Pipeline] End of Pipeline 00:42:57.010 Finished: SUCCESS